A cool new way to think about servers
Theo - t3․gg
Channel
Interviewed Person
Theo Browne (t3dotgg)
Description
Fluid compute is here! It's pretty good. There are catches though. Serverless has limitations, and I'm hyped that we're finally starting to move past them. Thank you Blacksmith for sponsoring! Check them out at https://soydev.link/blacksmith SOURCES https://x.com/vercel/status/1886829534667735136 https://vercel.com/fluid Check out my Twitch, Twitter, Discord more at https://t3.gg S/O Ph4se0n3 for the awesome edit 🙏
Transcript
It's been a while since I was this excited about servers. Yes, servers, not serverless, because Vercel's new Fluid compute thing is actually really cool, and there's a lot we can all learn from it whether or not we're using Vercel itself. I have a lot to talk about here, from the history of how we moved from Lambda back over to servers, to why Vercel built this, and all of the AI stuff that led to us getting here in the first place. Believe it or not, AI is a huge part of why this new compute primitive is necessary, and once you understand how it works and the incentives that got us here, it might make a lot of sense to you
even if you don't end up using it. There's a lot we can all learn from this new model, and I'm so excited to dive in. But since Vercel doesn't pay us anymore, a quick word from today's sponsor. If you like shipping slowly and taking your time, today's sponsor is not for you. For everyone else, you should definitely check out Blacksmith. They'll make your builds two times faster, or possibly even more, and they've done it with some really cool, crazy hacks. We'll talk about how easy it is to set up; it's literally just this one-line code change. But what really matters is the things they did to make it so fast. They found a fun hack that I'm surprised more server companies haven't: they're using gaming CPUs. Gaming CPUs are much better at
single-threaded performance, which makes them much better for everything from Docker builds to actually compiling your code. And these numbers aren't theoretical; they're showing real wins in real projects. PostHog's Docker builds went from 8 minutes to a minute 27 seconds, Node's went from 3 hours to a little bit under two, and both saw massive reductions in cost as well. It's important to note this isn't just for JavaScript. Obviously Node is built in C++, but they have custom caches for pretty much every language you would reasonably use, from Go to Ruby to Python to even Zig. And they'll do anything for performance: they are building their own
storage architecture that's up to four times faster, and their cache is way bigger too, 25 gigs instead of GitHub's measly 10. If you think this is too good to be true, I get it. Thankfully, they offer you 3,000 minutes for free, no credit card required. Go set it up today and tell them I sent you. Thanks again to Blacksmith for sponsoring; check them out today at soydev.link/blacksmith. Before we get to Fluid compute, we need to talk a little bit about the stages of compute that we had before. First, we had traditional dedicated servers. Hopefully we all understand how these work: you have some software, sometimes an image of
software like a Docker image, that is being hosted on a VM or a computer dedicated to whatever you're doing. So, dedicated servers: an actual computer or VM, always on, handles traffic in parallel, needs to be scaled manually. This is the key part; I'd say these two are the key things to know. The dedicated server by default is always on. If it isn't, then you're going to have to wait
some time for it to be provisioned. Because if, I don't know, you want something like preview deploys: if I go to the T3 Chat repo and find some random branch in here, you'll see that all of these have a really useful preview link where I can click Visit Preview, and this will bring me to a temp URL that is just this branch, hosted, so that we can go play with this new build that Mark just did. These things are great if you have a model of compute where you don't need a dedicated server for every version of your app. But if you're using
traditional dedicated servers, that means every single branch needs to have its own box dedicated to it. It also means if you have more users than you expect, you need to spin up more boxes, and if you have fewer users, you need to either move to a cheaper box or spin it down. The fixed-cost model of dedicated servers kind of sucks if your workloads aren't fixed too. Say you have a traffic pattern like, I don't know: during the day you have a ton of traffic, so we start the day with a lot of users, but then at night it goes down a whole bunch, and then once the next day comes it starts again. And we have a server that's
capable of this much traffic, or we can double the performance with a more expensive server that goes up to here. What are your options? Because this sucks: either you pay for the smaller server all the time and try to guess when the traffic is going to go up and move to the bigger one, knowing that some users are going to hit an error if you don't scale in time, or you just always pay for the more expensive one even though half the time your traffic isn't close. This isn't great, and that's
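To make that dilemma concrete, here's a tiny sketch of the math. All the numbers below (traffic curve, capacities, prices) are made up for illustration, not from the video: a small server that can't cover the daytime peak versus a large one that idles half the night.

```python
# Hypothetical day/night traffic pattern (requests per hour) over 24 hours:
# quiet nights (200 req/hr) and a busy daytime window (1000 req/hr).
HOURS = 24
traffic = [200 if h < 8 or h >= 20 else 1000 for h in range(HOURS)]

# Two fixed-size dedicated servers (capacities and prices are invented):
small = {"capacity": 500, "cost_per_hour": 0.10}   # covers the night, drops daytime traffic
large = {"capacity": 1200, "cost_per_hour": 0.25}  # covers the peak, idles at night

def evaluate(server):
    """Daily cost, dropped requests, and average utilization for one fixed server."""
    cost = server["cost_per_hour"] * HOURS
    dropped = sum(max(0, t - server["capacity"]) for t in traffic)
    served = sum(min(t, server["capacity"]) for t in traffic)
    utilization = served / (server["capacity"] * HOURS)
    return cost, dropped, utilization

for name, server in [("small", small), ("large", large)]:
    cost, dropped, util = evaluate(server)
    print(f"{name}: ${cost:.2f}/day, {dropped} requests dropped, {util:.0%} utilization")
```

With these made-up numbers, the small box drops thousands of daytime requests, while the big box never drops anything but sits around half-utilized, which is exactly the "pay for errors or pay for idle" trade-off the transcript describes. Autoscaling and serverless both exist to escape this fixed-capacity bind.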
Video Details
- Duration
- 37:09
- Published
- February 13, 2025
- Channel
- Theo - t3․gg
- Language
- ENGLISH
- Views
- 56,770
- Likes
- 1,714
Related Videos

All Things React Native and Navigation with Fernando Rojo
Theo - t3․gg
Interviewed: Fernando Rojo

1.36 - Reviewing Next.js 11 features with Tim Neutkens
CodingCatDev
Interviewed: Tim Neutkens

Tim Neutkens - An introduction to Next.js and what's to come in 2021
JSWORLD Conference
Interviewed: Tim Neutkens

Next for Next.js: See the powerful new features - Tim Neutkens
React Conferences by GitNation
Interviewed: Tim Neutkens