What is serverless #2 | Scaling your application | Prismic
Prismic
Channel
Interviewed Person
Guillermo Rauch
Description
Sadek and Guillermo discuss the scalability of serverless applications. More videos on what is serverless: https://www.youtube.com/playlist?list=PLUVZjQltoA3zXZ1ImqgO3ImQH4IFOYHcq

As a developer, you should build websites using your favorite Jamstack framework. Prismic allows you to build website sections that you can connect to a website builder for your client or team. They create pages from there, and you get that content back in your code through our fast API.

► [Tutorial] Build a full website with Next.js 13, Prismic, Tailwind and TypeScript: https://youtu.be/nfZu56KsK_Q
► [Tutorial] Build a full website with Nuxt 3 and Prismic's new Page Builder: https://youtu.be/8GmfcbuYOWE
► [Starters] Try Slice Machine on Nuxt: https://prismic.club/nuxt-starters
► [Starters] Try Slice Machine on Next.js: https://prismic.club/nextjs-starters
► [Learn more about Slice Machine]: https://prismic.club/slice-machine

► Find us also on:
Twitter: https://twitter.com/prismicio
Instagram: https://www.instagram.com/prismicio
LinkedIn: https://www.linkedin.com/company/prismic-io

► [Who are we?]: Prismic is a headless website builder for Next.js and Nuxt.js developers.

00:00 Intro
00:16 Why serverless usually gets associated with scalability
01:32 What would happen with a serverful model
02:42 Serverless functions
03:01 The challenge of initial loading time
05:40 Next.js multi-page website deployment

#Prismic #Serverless
Tags
Transcript
- Guillermo, in the first video we introduced serverless a bit, and you talked about what it is. I thought we could now talk about the scalability of serverless: why is it scalable?

- Yeah, that's one of the things that always gets associated with serverless: it's a more scalable paradigm. I think it has to do with a couple of things. The first one is the concurrency model of serverless. Going back to the fundamental principle, when a request comes in, your code gets executed, and you're not writing the code for the server behind it; you're writing only the code for when the request comes in. So if a request comes in, a function is instantiated and the code is run. If at the very same time another request comes in, because it's an API or you're doing HTML rendering or whatever, the two requests are not going to go to the same instance of your function. Concurrently, a new function instance is going to be created. What this does is make the resource allocation completely independent for each one of your incoming requests and your incoming customers.

- Or at least, if people develop that way, you can make them completely independent?

- Correct. With servers, what would happen, and we've seen this countless times, is: let's say you create a server to do something that uses a lot of CPU, like converting an image from JPEG to PNG. The developer can be very successful with a serverful model: you create a server with Express.js, for example, or any other server framework, you create your routes, and your route invokes ImageMagick, say, to convert the image. You test it: you boot up your server, localhost:3000...

- Works on my machine.

- It works correctly on my machine, and it converts the image. Great. Then you launch it, and maybe with as few as three or four requests, you realize you didn't contemplate that your server's resources are now shared between all your incoming requests. When you're dealing with databases, you may not hit this problem immediately, not until you get more load, but with things that are a bit more CPU-heavy, the problem hits you almost immediately.

- Right.

- Your server just breaks down entirely. With serverless functions, what ends up happening is that whether you have a lot of traffic or not, you have a concurrency model that always scales, because each invocation is independent of the others when they happen at the same exact time.

- But does that mean you have to start up the function each time you get a request?

- Yes.

- Which is loading time, right?

- Yeah, there's this hot-and-cold problem that always gets discussed, and there's no escaping it. I think one has to embrace that things can be cold, because, as you correctly point out, you might try to pre-warm a function, but then any concurrent invocation is still going to be instantiated from scratch every time. So the solution that actually scales really well is to make your cold instantiations really fast.
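One general way to keep cold instantiations fast, independent of any particular platform, is to keep module-scope initialization tiny and defer heavy work until an invocation actually needs it. The sketch below assumes a Node.js serverless runtime; the handler shape and the "image toolkit" are illustrative stand-ins, not a specific platform's API.

```javascript
// Everything at module scope runs on every cold start, so keep it
// minimal. Heavy dependencies are loaded lazily, on first use.
let imageToolkit = null;

function getImageToolkit() {
  // Hypothetical heavy dependency. Loading it lazily means cold starts
  // that never touch it (e.g. health checks) stay fast.
  if (imageToolkit === null) {
    imageToolkit = { convert: (name) => name.replace(/\.jpe?g$/i, ".png") };
  }
  return imageToolkit;
}

function handler(request) {
  if (request.path === "/health") {
    // Fast path: the toolkit is never loaded for this route.
    return { status: 200, body: "ok" };
  }
  const converted = getImageToolkit().convert(request.file);
  return { status: 200, body: converted };
}
```

Because each concurrent request may hit a fresh instance, this lazy cache only helps warm invocations of the same instance; the real win is that the module itself stays cheap to load from scratch.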
- How can you do that?

- It's possible. The way we do it on our platform is that all your entry points into your code, all your JS files, for example, become discrete functions. So, for example, you may have a directory called /api and you say, "Hey, I'm going to create /api/users.js." You can very easily tell our system, "Hey, this is a function. It uses Node.js, or it uses TypeScript, or it uses Go." And we build only that as a function,
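The file-as-function pattern described here can be sketched as follows. This is a hypothetical /api/users.js using the Node.js (req, res) handler convention common on platforms like the one discussed; the data and handler body are illustrative, not a real API.

```javascript
// Hypothetical contents of /api/users.js. On a file-based serverless
// platform, this one file is built and deployed as its own function,
// scaled independently of the rest of the site.
const users = [
  { id: 1, name: "Ada" },
  { id: 2, name: "Grace" },
];

// (req, res)-style handler, as in Node.js server runtimes.
function handler(req, res) {
  res.statusCode = 200;
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify(users));
}

module.exports = handler;
```

Because the function contains only what this one route needs, its bundle stays small, which is exactly what keeps its cold instantiation fast.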
Video Details
- Duration
- 6:54
- Published
- April 11, 2019
- Channel
- Prismic
- Language
- ENGLISH
- Views
- 2,782
- Likes
- 80
Related Videos

How to use Next.js with other frameworks and tools ft Tim Neutkens | Prismic
Prismic
Interviewed: Tim Neutkens

Next js 15 with Jimmy Lai and Tim Neutkens
Software Engineering Daily
Interviewed: Tim Neutkens

How to build APIs with Next.js? ft Tim Neutkens | Prismic
Prismic
Interviewed: Tim Neutkens

16. Pioneers. with Tim Neutkens Next.js lead and co-author
Catch the Tornado
Interviewed: Tim Neutkens