Hands on with the Vercel AI SDK 3.1
Channel
Vercel
Interviewed Person
Nico Albanese
Description
Learn how you can use the Vercel AI SDK as your TypeScript framework for building AI applications.

0:00 – Introduction
0:28 – Generate Text
1:31 – Stream Text
2:18 – Generate Object
3:15 – Stream Object
3:55 – Tools
6:05 – Building a chatbot with AI SDK UI
7:31 – Generative UI chatbot with AI SDK RSC
12:22 – Conclusion

◆ Repo with examples: https://github.com/nicoalbanese/ai-sdk-fundamentals
◆ Docs: https://sdk.vercel.ai/docs
◆ Blog: https://vercel.com/blog/vercel-ai-sdk-3-1-modelfusion-joins-the-team
◆ Demo: https://chat.vercel.ai/
◆ v0: https://v0.dev
Transcript
The Vercel AI SDK is the TypeScript toolkit for building AI applications. In this video, we're going to build a few applications to understand how it works. We'll start by building a few terminal programs to understand AI SDK Core, then we'll build a chatbot with AI SDK UI, and finally we'll go beyond text, streaming React components from the server to the client with AI SDK RSC. Let's get started.

To start, we're going to create a TypeScript file and define a main function. Within that main function, we're going to call generateText. To use generateText, or any AI SDK Core function, we first have to provide a model. Let's import the OpenAI provider and then specify the exact model we want to use, in this case GPT-4o. AI SDK Core has been designed to make changing models as easy as changing two lines of code, so let's see how we can switch from GPT-4o to Gemini Pro: we first import the Google provider and then specify the model we want to use. Let's go back to GPT-4o for this example. Now we need to specify our prompt; we're going to ask GPT to tell us a joke. Finally, we log the generated text to the console. Let's run the script in the terminal and see what happens. The model responded: "Sure, here's a lighthearted joke for you: why don't skeletons fight each other? They don't have the guts." That's pretty good,
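The exact snippet isn't shown in this transcript, but a minimal sketch of the generateText example described above might look like this (package names taken from the AI SDK docs; the prompt wording is assumed):

```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

async function main() {
  // Call the model once and wait for the full completion
  const { text } = await generateText({
    model: openai('gpt-4o'),
    prompt: 'Tell me a joke.',
  });
  console.log(text);
}

main().catch(console.error);
```

Swapping models is the two-line change mentioned in the video: replace the provider import (e.g. `@ai-sdk/google`) and the `model:` line, leaving the rest untouched.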
But did you notice there was a bit of a delay between when we ran the script and when the model returned a response? We can solve this with streaming. Streaming allows us to send the model's response incrementally as it's being generated. Let's update our example to use streaming. All we need to do is change the generateText function to the streamText function and then handle the streaming result: we use an asynchronous for-await loop to iterate over the resulting text stream and log it to the console. If we run this now in the terminal, let's see what happens. And just like that, our joke is streamed ChatGPT typewriter-style. How cool! We now know how to generate and stream text with a large language model. But notice that the model's response doesn't contain just the joke. Wouldn't it be nice if we could return our joke in a structured format? With the help of Zod, a schema validation library, we can do just that. We're going to go back to generating rather than streaming.
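As a sketch, the streaming change described above amounts to swapping the function and consuming the stream (same assumed setup as before):

```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

async function main() {
  const result = await streamText({
    model: openai('gpt-4o'),
    prompt: 'Tell me a joke.',
  });
  // Print each chunk of text as soon as the model produces it
  for await (const chunk of result.textStream) {
    process.stdout.write(chunk);
  }
}

main().catch(console.error);
```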
We've got our generateText example. To force the model to return a structured object, we first change generateText to generateObject, then import Zod and define a Zod schema for our joke. Our joke is going to have two keys: setup, which is a string, and punchline, which is also a string. We can also optionally describe each of our keys to ensure the model has the appropriate context to give us a great generation. Finally, we log the resulting object to the console. Let's run the script and
see what happens. So now we have our joke, but in a structured format. Let's see how it did. Setup: "Why don't scientists trust atoms?" Punchline: "Because they make up everything." Again, not too bad. Just like in our generateText example, you may have noticed there was a bit of a delay between when we ran the script and when the model returned a response. We can again fix that using streaming. Let's update our example to use streamObject: first we change the function from generateObject to streamObject, and then, to handle the streaming response, we use an asynchronous for-await loop to iterate over the partial object stream and log each partial object to the console. Let's run this in the terminal and see what happens. Awesome, our structured joke is now streamed directly to the console. As you can see, AI SDK Core makes it simple to call any large language model. But while LLMs are powerful, they're also known for hallucinating, that is, making stuff up. We can
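The structured-output steps above can be sketched in one file; the schema shape comes from the transcript, while the key descriptions and prompt wording are assumptions:

```typescript
import { generateObject, streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Zod schema for the joke, with optional descriptions for extra model context
const jokeSchema = z.object({
  setup: z.string().describe('the setup of the joke'),
  punchline: z.string().describe('the punchline of the joke'),
});

async function main() {
  // generateObject: wait for the complete structured object
  const { object } = await generateObject({
    model: openai('gpt-4o'),
    schema: jokeSchema,
    prompt: 'Tell me a joke.',
  });
  console.log(object);

  // streamObject: log partial objects as the fields fill in
  const result = await streamObject({
    model: openai('gpt-4o'),
    schema: jokeSchema,
    prompt: 'Tell me a joke.',
  });
  for await (const partialObject of result.partialObjectStream) {
    console.log(partialObject);
  }
}

main().catch(console.error);
```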
Video Details
- Duration
- 13:04
- Published
- May 21, 2024
- Channel
- Vercel
- Language
- ENGLISH
- Views
- 49,326
- Likes
- 1,742