Interviewee: Lars Grammel
In this conversation, Marc (CEO of Langfuse) and Lars Grammel (creator of the Vercel AI SDK) chat all things AI development and the AI SDK. Lars shares what motivated him to start ModelFusion, how he joined Vercel to continue work on the AI SDK, and how they made big iterations in AI SDK v4 and v5. Along the way, he faced the challenges of standardization in API design and worked closely with the JS community. Towards the end, they also touch on observability and telemetry in AI SDKs, as well as the focus on building user-friendly applications for developers. This conversation was part of the Langfuse Context series at the Langfuse Berlin office. We will host more of these in-person conversations. Follow our event calendar here: https://langfuse.com/events

00:00 Intro
01:26 Journey to ModelFusion
06:13 Integrating ModelFusion with Vercel
08:58 The Transition to AI SDK Version 5
11:20 Building for the Future of AI Applications
15:05 Observability and Telemetry in AI SDKs
18:11 Challenges in Standardization and API Design
21:03 Future Directions
25:28 Audience Questions
I'm happy that OpenAI is now at version five. I think we all waited several years for that, and it's pretty big in a way. It is what I expected: the capabilities of the model have obviously gone up to the next level, and they added a few additional tweaks, but it was not the kind of next-level moment that GPT-4, for example, had, where you were like, "Oh my god, what is this? It's so much better than everything I've ever seen." I think everybody who has seen the trajectory that the models have been on
over the past two years was not surprised. Clearly, it's the best model. Clearly, it's really good. I'm sure almost everyone here will be using it. But it is within the realm of what I would have expected. Yeah. So, generally, I'm super excited to have Lars here, because Langfuse operates mostly in San Francisco and Berlin. In San Francisco, there are many interesting events going on with people who basically are decision makers in the space. But there
are some interesting people here in Berlin as well, and many of them are unknown, like Lars, who is mostly visible online but not in the physical world. So thanks for coming out of your home, where you mostly work from, to share some of your knowledge here. Lars is one of the creators of the Vercel AI SDK, previously worked on ModelFusion, and exited that project to Vercel to now lead the Vercel AI SDK. Maybe we can start there. How did this come to be? Why did you start working on ModelFusion, and
how did you then exit to Vercel, and why? Yeah, I can go back about three years, even before the ChatGPT moment. I was working on automatic code transformation and migrations with my own little solopreneur company that I had back then, and in summer 2022 there were the first workshops and so forth, and people started to talk about foundation models and what GPT-3 can do. So I got interested in that and was monitoring the space, and the day I remember is when ChatGPT came out,
right, and I didn't sleep the whole night. I was like, okay, this is next level, right? And the next day I went in and built it into the application I had, a similar thing where you could chat with AI, and I was convinced that there was a lot of potential there. So I kind of stopped what I was doing at that point and explored different ideas. For example, VS Code plugins for code generation, all the stuff you see nowadays in Cursor, obviously, but years ago, very early. I dropped that idea;
it was called Rubber Duck. And then, when GPT-4 came out, I explored a few other directions, like agents and observability, and I started developing my own little agent library. That was a few months after LangChain became a big hit, and I wanted to do something different in JavaScript, because not everybody was happy with LangChain. And that was kind of the precursor. I started realizing that a lot of what LangChain was doing is just reinventing the wheel
in a way. So, reinventing the for loop, reinventing workflows, and whatnot. And it almost always got in the way for the use cases that I had, also as a contractor. I wanted something super simple. So I started looking at it: what are the good ideas? What are the bad ideas? And the core idea of ModelFusion is essentially three things. I want to have something very, very simple that I can use as a function, that doesn't get in the way, so that I can focus on my prompts and just generate or stream text and not
worry about it. I want to use models from all kinds of providers, not just OpenAI, in a provider-independent way, even locally with llama.cpp and so forth. So I needed to abstract that. And then I didn't want to reinvent the for loop; I didn't want to reinvent a workflow, because when I built my little agent framework, I realized that this just gets in the way. At the end of the day, you just want to write JavaScript or TypeScript or whatever language you want, and then control exactly what you're