Channel: Vercel
Category: Conferences
Opening Keynote with Guillermo Rauch (Vercel), Sam Selikoff (Vercel), Jimmy Lai (Vercel)
Learn more about Next.js 16 here: https://nextjs.org/blog/next-16
Timestamps:
0:00 – Turbopack for Development, Filesystem Caching
14:16 – Instant Navigations and Cache Components
35:56 – Deployment Adapters and Next.js 16
Please welcome to the stage Vercel founder and CEO Guillermo Rauch. Good morning. Good morning, everybody. Thank you, and welcome back. Today we're having the sixth annual Next.js Conf. I'm pumped to spend a day with everyone here in this room. And a special shout-out to the thousands of people watching around the world, especially our watch parties in London and Berlin. Thank you for spending your time with us today. Last year when we met, we were a year and a half into this App Router era of Next.js. And at that time, our framework was being downloaded 7 million times per week. And over the course of building the App Router, we made a lot of bets.
We'll be talking a lot about bets today. When it first launched, the App Router had the earliest mainstream implementation of React Server Components. It also supported nested layouts. It used the file system as the API for its routing features. And it kicked off the development of Turbopack to ensure that this new server-first architecture would scale to apps of all sizes.
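To make those bets concrete, here is a minimal sketch of what file-system routing with nested layouts looks like in the App Router. The folder and file conventions are Next.js's own; the component contents are illustrative placeholders.

```tsx
// app/layout.tsx — the root layout wraps every route in the app
import type { ReactNode } from "react";

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <body>{children}</body>
    </html>
  );
}
```

```tsx
// app/dashboard/layout.tsx — a nested layout, applied only to routes under /dashboard
import type { ReactNode } from "react";

export default function DashboardLayout({ children }: { children: ReactNode }) {
  return <section>{children}</section>;
}
```

```tsx
// app/dashboard/page.tsx — a React Server Component rendered at /dashboard,
// nested inside both the root layout and the dashboard layout
export default function DashboardPage() {
  return <h1>Dashboard</h1>;
}
```

Requesting /dashboard renders DashboardPage inside DashboardLayout inside RootLayout; the folder structure alone defines the route, with no separate routing configuration.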
Today, Next.js is being downloaded 13 million times per week, nearly double. Thank you. Nearly double the adoption in just one year, especially with AI math, right? That tells us that many of our bets are paying off, and we're so grateful to the community who has helped us get the App Router to where it is today. Thank you so much. But how we're writing software has changed. When we started the App Router,
GPT-3 had just come out. ChatGPT, or "Chat," didn't exist. GitHub Copilot was in beta. In a very short amount of time, we've gone from writing code by hand, to getting suggestions, to having models write code, to now having agents that author, test, execute, and ship entire features. And what we've seen with these LLMs is that they really push us in terms of the design of our own APIs. An LLM's context window is even shorter than a human's attention span. So if we make an API that's confusing for anyone in this room, the LLMs stand no chance. The easier we make the developer experience for humans, the better we make it for agents. So with all of this in mind, let's take a closer look at our bets and see which
ones paid off and which ones we're revisiting. So the first one we're really happy with, I'm really happy with, is React Server Components. When we decided to go all-in on RSCs, we were investing in a novel, unproven architecture. Fast-forward to today, and we're seeing RSCs gain adoption in popular ecosystem projects like Vite, React Router, Redwood