Channel: Vercel
Interviewed: Conferences
Title: How the Next.js core team is improving the AI agent experience in Next.js
Get a demo today: https://vercel.com/contact/sales/demo
Good morning, everyone. I'm Jude, a software engineer on the Next.js team at Vercel. As we all know, AI agents can do a lot of stuff, right? They can read your code. They can run your dev server for you. They can click around the browser for you to do testing. But is there anything that agents cannot do? Well, they can't see what
Next.js knows at the framework level. They don't know what's happening at runtime, so they're blind to what's actually happening inside your app. They don't see which routes are currently rendering on the page. They don't see why hydration failed. All that rich, dynamic context inside the core of the framework is invisible to them. Today that changes, because for the first
time I'll show that AI can talk directly to the framework. You will see it in action: an agent fixing a hydration error, an agent editing the right file without much agentic search, and an agent migrating an entire app to Next.js 16. All powered by one simple idea: what if AI could see what the framework sees?
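(As a rough sketch of that idea, and not the actual Next.js interface: imagine the dev server exposing its runtime state in a machine-readable shape that an agent can query. Every name below is an illustrative assumption.)

```ts
// Illustrative sketch only: the endpoint path and type names are hypothetical,
// not the real Next.js interface. The idea is that the running dev server
// exposes framework-level state (rendered routes, hydration failures, version)
// so an agent can query it instead of guessing from terminal output.
interface FrameworkRuntimeState {
  activeRoutes: string[];          // routes currently rendered on the page
  hydrationErrors: Array<{
    route: string;
    componentStack: string[];      // where the mismatch happened
    diff: string;                  // server HTML vs. client render
  }>;
  nextVersion: string;
}

async function readFrameworkState(devServerUrl: string): Promise<FrameworkRuntimeState> {
  // Hypothetical endpoint name; real tooling would define its own protocol.
  const res = await fetch(`${devServerUrl}/__agent/runtime-state`);
  return (await res.json()) as FrameworkRuntimeState;
}
```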
So let's start with where we are today with dev tools. Developer experience has always been our north star with Next.js. We've shipped clearer hydration errors, more readable call stacks, and a completely redesigned UI for overlays.
All of these tools are visually pleasing. They are interactive. They're built for humans staring at screens, and they work, to some extent. When you see this error overlay, you instantly know what's broken. You can click through the stack and see the component. And because of the improvements we made to hydration errors, you can also get the exact place of the hydration mismatch. This is great DX.
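(A minimal illustration, not from the talk: a hydration mismatch typically comes from a component that renders different output on the server than on the client, like this hypothetical one.)

```tsx
// Hypothetical component showing a classic hydration mismatch: the timestamp
// rendered on the server differs from the one produced on the client during
// hydration, so React reports that the server HTML does not match.
export default function Clock() {
  return <p>Rendered at {new Date().toLocaleTimeString()}</p>;
}
```

The overlay can then point at the exact text node whose server-rendered HTML differs from the client render.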
But here's the shift: more and more of us are not staring at screens anymore. We're delegating to AI. It all started with autocomplete. Then came full AI-powered IDEs, and now some people just talk directly to the terminal and watch features being built.
This workflow has changed fundamentally, which means all these visual tools we've built don't work well for agents. You and I can see the error overlay clearly, but when you tell your AI to fix this error, it has no idea what you're talking about, because the AI cannot see what you are seeing
in the overlay. The critical context, the thing you need to fix the bug, is completely outside the agent's context window. It can read your source code. It can see your terminal output. But it can't see Next.js's internal state, which is so critical for the agent's task at that moment. This creates a bizarre situation where you, the human, become the bottleneck. You have to manually copy