Vercel made a Model (v0 in Cursor)

Published: May 23, 2025
Duration: 6:19
Views: 13,707
Likes: 243

Channel: Better Stack
Interviewed Person: v0

Description

Vercel just dropped v0's model, v0-1.0-md, and it's awesome. This means we can use v0 in Cursor, so let's see how. With a 128K-token context window, built-in auto-fix, quick-edit streaming, and deep knowledge of modern stacks like Next.js, v0 can crank out anything from single React components to full SaaS dashboards (and even 3D games) in a single prompt. In this video I set up v0 in Cursor and compare what we get to v0.dev

🔗 Relevant Links
API Docs: https://vercel.com/docs/v0/api
Add to Cursor: https://vercel.com/docs/v0/cursor

❤️ More about us
Radically better observability stack: https://betterstack.com/
Written tutorials: https://betterstack.com/community/
Example projects: https://github.com/BetterStackHQ

📱 Socials
Twitter: https://twitter.com/betterstackhq
Instagram: https://www.instagram.com/betterstackhq/
TikTok: https://www.tiktok.com/@betterstack
LinkedIn: https://www.linkedin.com/company/betterstack

📌 Chapters:
0:00 Intro
0:43 Model Demo
1:22 Model Details
2:23 Add to Cursor
3:25 Using v0 in Cursor
5:22 Compared to v0.dev

Transcript


So Vercel are releasing their own models now, starting with v0-1.0-md (just rolls right off the tongue). It seems like they're going to be releasing upgraded versions of this as well, as soon as next week, alongside some benchmarks. Now, if you've used v0 before, you know it's particularly strong at web development, and of course, this being Vercel, it's very good at Next.js. It can generate you anything from simple components all the way up to full-on apps and Three.js games. But it seems that under the hood, to power this, they're not just using custom prompts and RAG pipelines on top of an existing AI provider; they're actually building out their own internal models.
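The model is exposed through an OpenAI-compatible API, as the video covers next. A minimal sketch of what a request might look like — the base URL and model id here are taken from the video and Vercel's v0 API docs, and `build_chat_request` plus the `V0_API_KEY` env-var name are hypothetical helpers, not anything from the video:

```python
import json
import os
import urllib.request

# Assumptions, hedged: base URL per Vercel's v0 API docs, model id per the video.
V0_BASE_URL = "https://api.v0.dev/v1"  # OpenAI-compatible endpoint
MODEL = "v0-1.0-md"

def build_chat_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for the v0 model
    (hypothetical helper for illustration)."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{V0_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Actually sending it needs a key from v0.dev (Premium/Team plan with
# usage-based billing turned on, per the video):
if os.environ.get("V0_API_KEY"):
    req = build_chat_request(
        "Write a modern to-do app using shadcn components",
        os.environ["V0_API_KEY"],
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint follows OpenAI's request/response format, the same payload works with any OpenAI-compatible SDK by swapping in the base URL above.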

These are now available via the API, so you can add this to your own applications, or even add it to Cursor and Codex to get it to do the coding for you. After you hit that subscribe button to stay up to date with AI news, let's take a look at the v0 model. If you want to quickly test what the output of this model looks like, you can actually try it out on the Vercel AI Playground. Here you can see I've got the model selected, and I'll say "write a modern to-do app using shadcn components". What you'll see here is that we're generating the thinking tokens first, so we will be able to see its thought process, and then after a while it's going to go ahead and generate the code for me. And obviously this needs to

go through the setup instructions and also generate the various code blocks for the components. It's not going to do what v0 does, where it actually hosts the application somewhere and is able to run it; that's something you'd have to build out yourself, as this is just the base model that powers v0. But once that's all finished, I can confirm by scrolling up that this looks pretty similar to all of the code that I've generated in v0 before, and it all looks like very good Next.js and shadcn code. On the API documentation you can find out a bit more about this model. As you can see, it is multimodal, so it will support both text and image input. It has auto-fix, so it identifies and corrects common coding

issues during generation. Quick-edit means it streams inline edits as they're available. And then framework-aware completions is where this really excels, as it's been evaluated on modern stacks like Next.js and Vercel, which makes it really good at web development. In terms of the API itself, as you can see, it is OpenAI-compatible, so it can be used with any tool or SDK that supports OpenAI's API format. We see a lot of models going that way; it is super cool to see that they're sort of agreeing on a standard here. In terms of usage limits, as you can see, it has a max context window size of 128,000 tokens and a max

output context size of 32,000 tokens. Now, this is in beta at the moment, so sadly it does have a limit of 200 messages per day, but I imagine at some point that will be lifted and it will just be done on usage-based billing. In terms of using the API, as you can see, this is the endpoint you're going to need to add this into your applications, or if you're using the AI SDK, they've obviously made that super simple too. So let's go ahead and add this to Cursor and see how it performs there. The first step is to get your API key, which you can do on v0.dev: just go into your settings, API Keys, and create a new key. Now, at the moment you do need to be a

Premium or Team plan user with usage-based billing turned on, and then you can go ahead and generate your API key there. Once you have your API key, head over to Cursor and go into the model settings. Now, this is where things are going to get a little bit hacky, and I hope they find a better way to do this in the future when this is out of beta. What we're going to be doing is overriding the OpenAI API and also the API key. So you go to the OpenAI API key section, paste in that v0 API key that we just created, and then also override the base URL with the one from the documentation over here. I'll leave this linked in the description down below. You

can hit Verify to make sure that this is all working and you can connect to the v0 API. But what's actually going to happen here is that any OpenAI model we use is actually going to be using v0-1.0-md instead, as it's being routed through the v0 API. What this does mean is you can't use the OpenAI models going forward; you have to come back and turn this setting off. So I do hope they find a better way to actually add the v0 model into Cursor itself in the future, which I'm sure they will; this is just a beta. Once you've done that in the settings, all you need to do in Cursor is go into agent mode, make sure you select an OpenAI model, and to test it's actually working,

you can ask "who are you?", and hopefully it replies that it's v0. As you can see: "I'm v0, Vercel's AI-powered assistant. I help with building, debugging and optimizing apps using Vercel's conventions." So let's try to build an app with this now that we know it's working. I'm going to go ahead and give this the same prompt that I actually gave v0.dev earlier: to create a modern to-do application using shadcn components. I'm not going to set up any Next.js application here; I'm hoping that it'll do that with commands. Here we go, we're generating a command. I'll go ahead and click Run on this one, so it creates the Next.js application, as we can see in the
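The Cursor setup in the segments above boils down to pointing Cursor's OpenAI base URL override at the v0 API. The same "who are you" smoke test can also be run outside Cursor with a raw request; a rough sketch, where the base URL is per Vercel's docs and both `build_smoke_test` and the `V0_API_KEY` env-var name are assumptions for illustration:

```python
import json
import os
import urllib.request

# The base URL that Cursor's "Override OpenAI Base URL" setting points at
# (per Vercel's v0 docs); the helper below is a hypothetical sketch.
V0_BASE_URL = "https://api.v0.dev/v1"

def build_smoke_test(api_key: str) -> urllib.request.Request:
    """The 'who are you' check from the video, as a raw request."""
    payload = {
        "model": "v0-1.0-md",
        "messages": [{"role": "user", "content": "Who are you?"}],
    }
    return urllib.request.Request(
        f"{V0_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if os.environ.get("V0_API_KEY"):  # assumed env-var name
    with urllib.request.urlopen(build_smoke_test(os.environ["V0_API_KEY"])) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
        # Expect the model to identify itself as v0, as it does in the video
        print(reply)
```

If the reply identifies itself as v0 rather than an OpenAI model, the override is working, which also confirms the caveat from the video: every "OpenAI" request is now being routed through the v0 API.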

