Channel: Theo - t3.gg
Interviewed Person: Theo Browne (t3dotgg)
The race to make the best AI model for programming just got even more insane with Vercel dropping their first model...
Use code CMON-THEO for 1 month of T3 Chat for $1: https://soydev.link/chat (Only valid for first time customers)
SOURCE https://x.com/v0/status/1925375968077914268
Want to sponsor a video? Learn more here: https://soydev.link/sponsor-me
Check out my Twitch, Twitter, Discord more at https://t3.gg
S/O Ph4se0n3 for the awesome edit 🙏

The race to make the best model for coding with AI is insane. And there's a new entrant that I did not expect at all: v0 by Vercel just dropped their own model. I really did not expect this. There isn't much in terms of detail about what that means. Like, is this a custom GPT? Is this something they trained from scratch? It's really hard to know for sure. What I do know for sure is that they have a ton of data: inputs and outputs, endless source code they can scrape, and most importantly the feedback from people giving a thumbs up, thumbs down, retrying, and whatnot. And it seems like they decided to use that to refine a model to make v0 as good as possible. The result is that v0 consistently makes some of the best-looking UIs of any AI platform I've used. But it seems like the magic behind a lot of that is the model that they built, which is now able to be used via API.

This dropped last night, which was the 21st, relatively late at almost 8:00 p.m. Pacific time, which is the time zone they're based in. I know that because I live relatively close to the office. Why would they drop this out of nowhere so late at night? The reason is because today Claude 4 dropped. You're probably watching this video a day or two after, maybe later. I don't know when you watch the video. I barely even know when it's going to come out. The point being, this model drop seems to have been timed as well as it could be, because if they had done it after Claude 4, it wouldn't have had any splash at all. Now, it still can. And there's a lot that we can learn from this model. I'm very excited to dive in with all of you, but as you guys know, Vercel and I
broke up quite a while ago, so I'm not being paid a cent by them for this video. If anything, it's probably going to cost a lot of money, because the price of the model is quite expensive. So, yeah, someone has to cover the bill. So, quick word from today's sponsor and then we dive in.

Obviously enough, I just went and set this up in T3 Chat. The official guidance when I tried to do this initially was to use the OpenAI provider from the AI SDK, because they didn't have their own provider for it yet, which doesn't sound like that big a deal, because everyone uses the OpenAI standard, right? This AI package that I'm using, including the SDK stuff here, that's all by Vercel. So I just thought it was kind of funny to mention that if you wanted to use this last night, you had to use the OpenAI package by Vercel instead of the Vercel package by Vercel. Anyways, after a little bit of finagling, I got it working relatively well. I just have it in my dev environment here, and we got it to start generating some front end. It wasn't the fastest thing, but it did generate a lot of it.
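For reference, here's a minimal sketch of that setup with the AI SDK, assuming v0 exposes an OpenAI-compatible endpoint; the base URL and model id below are assumptions based on v0's docs at the time and may have changed, so check the current API reference before relying on them.

```ts
// Minimal sketch: calling the v0 model through the AI SDK's OpenAI provider.
// Assumes V0_API_KEY is set and that the base URL / model id below are still current.
import { createOpenAI } from "@ai-sdk/openai";
import { streamText } from "ai";

const v0 = createOpenAI({
  // Assumed OpenAI-compatible endpoint for the v0 model API.
  baseURL: "https://api.v0.dev/v1",
  apiKey: process.env.V0_API_KEY!,
});

const result = streamText({
  // ".chat" forces the chat-completions protocol, which OpenAI-compatible
  // endpoints generally expect.
  model: v0.chat("v0-1.0-md"),
  prompt: "Generate a landing page for T3 Chat using Tailwind.",
});

// Stream the generated markup to stdout as it arrives.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```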
And I wanted to compare this to some testing I was doing earlier today across the different models, including o4-mini, Claude 4 Opus, Claude 4 Sonnet, etc. I will say that by far the best-looking one was Claude 4 Sonnet. So if we go to the Tailwind playground, we can just throw this in quick. I have to yoink this guy, switch over to Tailwind 3.4, paste that here. And now we can see what Claude 4 was able to generate for my quick fake homepage for T3 Chat using Tailwind. And it came out pretty good looking, especially when you compare it to the other options. Let me just grab one of the worst ones quick. GPT-4.1's was pretty bad. This is what GPT-4.1 generated, for reference. It didn't handle text color right, and it's just kind of not great looking. The gradient's cool, although I'm sure that's going to compress like crap in the video. There's a reason I'm using a Chromium-based browser for all of this. Gradients are hard for Firefox and video.

So, how did v0 do? I have not looked at this yet. You guys are about to see my genuine reaction. Okay, it handles responsiveness better than all the others by quite a bit. It made these different sections, a testimonial section. It did a lot more, like the actual length of the output is significantly longer. It was, yeah, 586 lines.
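As a rough illustration of that kind of side-by-side test, here's a sketch that sends the same UI prompt to a few models through the AI SDK and writes each response to a file so the Tailwind output can be pasted into the playground. The model ids are assumptions, and the comparison in the video was done through T3 Chat rather than a script like this.

```ts
// Sketch of a side-by-side model comparison, assuming OPENAI_API_KEY and
// ANTHROPIC_API_KEY are set. Model ids are assumptions; check each provider's
// current model list.
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { writeFile } from "node:fs/promises";

const prompt =
  "Build a fake homepage for T3 Chat as a single HTML file styled with Tailwind.";

const models = {
  "o4-mini": openai("o4-mini"),
  "gpt-4.1": openai("gpt-4.1"),
  "claude-4-sonnet": anthropic("claude-sonnet-4-20250514"), // assumed id
  "claude-4-opus": anthropic("claude-opus-4-20250514"), // assumed id
};

// Run the same prompt against each model and save the output for inspection.
for (const [name, model] of Object.entries(models)) {
  const { text } = await generateText({ model, prompt });
  await writeFile(`${name}.html`, text);
  console.log(`${name}: ${text.split("\n").length} lines`);
}
```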