Channel: Vercel
Interviewed: Conferences
Get a demo today: https://vercel.com/contact/sales/demo
I'm an engineer at Yupp, and we build a consumer product for helping people shape the future of AI.

Cool. I'm Raph, a co-founder at Kernel. We're crazy-fast browser infrastructure for your AI agents. Previously I was a co-founder of a company called Clever, an education company doing identity, so I'm very familiar with bots and identity.

That's great. As you can see, we are here to talk about BotID. It's a Vercel product that helps you identify bots and humans on the web. And we have both a user of BotID as well as a potential friend, or potential opponent, of BotID, depending on how you ask. So the first question, for Mchi: how do you balance preventing scraping and credit abuse with allowing legitimate automation by real users?
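For context on the integration discussed in the answer that follows: wiring a Next.js app up to BotID roughly takes two pieces, a config wrapper and a client-side declaration of which routes to protect. This is a minimal sketch following the `botid` package's documented setup as I understand it; the exact import paths and option names should be checked against the current docs, and the protected paths (`/api/prompt`, `/api/feedback`) are hypothetical stand-ins for the kind of endpoints discussed in this conversation.

```ts
// next.config.ts — wrap the Next.js config so BotID can route its client traffic.
// Assumption: wrapper name and import path follow the BotID docs; verify before use.
import { withBotId } from 'botid/next/config';
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  // ...your existing Next.js options
};

export default withBotId(nextConfig);
```

```ts
// instrumentation-client.ts — declare which routes BotID should protect.
// The paths below are hypothetical examples, not Yupp's actual endpoints.
import { initBotId } from 'botid/client/core';

initBotId({
  protect: [
    { path: '/api/prompt', method: 'POST' },
    { path: '/api/feedback', method: 'POST' },
  ],
});
```

The actual bot-versus-human decision then happens server-side, as sketched later in this section.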
Yeah, so to give a little bit of context on what we do at Yupp: for consumers, it's a platform where you can come and use any of the AI models that are available to the world. At this point we have 800 models on the platform, completely free of charge. Users come in, they submit their prompts, and they get two responses. We show them two because sometimes you want multiple perspectives, and sometimes a model hallucinates or is just out of date. You don't have to pay anything; instead, we ask the user to give us feedback on which response performs better, which one they like more.

Of course, being a free service that lets you use the sometimes-$100-per-month subscription models, the GPT-5 Pros of the world, we have a lot of abuse vectors. People want to come on the platform and maybe hook it up to an API endpoint so they don't have to pay for a Cursor subscription. That's why we integrated with BotID the day it was launched. For this particular use case, the problem statement we have is: how do we get high-quality human evaluation data? Because at the end of the day, what we want to collect is high-quality human feedback on how the models perform, and provide it as post-training data to the model builders of the world so they can improve the models they're building.

I know there are explorations into synthetic data and how AI might be able to perform this kind of task, but at this stage we don't really trust the taste of AI agents. So if you come to the platform, all you want to do is submit prompts and get responses for free, and the response that you pick is not picked by a human, it's not what we would value. Ultimately, that's the reason we do not allow automated use on the platform today for the purpose of submitting prompts and picking responses.
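To make that "humans only" gate concrete: a protected endpoint can ask BotID to classify the request before a vote is recorded. This is a minimal sketch assuming the documented `checkBotId()` helper from `botid/server`, which returns a verification object with an `isBot` flag; the route path, request shape, and `recordPreference` helper are hypothetical.

```ts
// app/api/feedback/route.ts — only record preference votes that BotID classifies as human.
import { checkBotId } from 'botid/server';
import { NextResponse } from 'next/server';

export async function POST(request: Request) {
  // Classify the caller using BotID's invisible client-side signals.
  const verification = await checkBotId();

  if (verification.isBot) {
    // Automated traffic is rejected so it never pollutes the human-feedback dataset.
    return NextResponse.json({ error: 'Automated traffic is not allowed' }, { status: 403 });
  }

  const { promptId, preferredModel } = await request.json();

  // Hypothetical helper: persist the human preference for later post-training use.
  await recordPreference({ promptId, preferredModel });

  return NextResponse.json({ ok: true });
}

// Placeholder so the sketch is self-contained; a real app would write to its own store.
async function recordPreference(vote: { promptId: string; preferredModel: string }) {
  console.log('recorded human preference', vote);
}
```

Rejected requests never enter the preference dataset, which matches the reasoning above: the collected post-training data stays human-generated.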
Makes sense, we want some high-quality data at the end of it. I guess a similar question for you, Raph: what do you think about automating user journeys that end up hitting a CAPTCHA or something like BotID, and what's your take on how we can work together on this?

Yeah, I definitely think we're at a crossroads on the internet at large, where we've built up all of these systems to distinguish bot versus human, and now we have a new class of bot that is working on behalf of a human and maybe doing something that looks very different from your typical bot. So what we think about on this front is: what are the ways we can craft pathways for those agents, so that end websites can distinguish good versus bad, because I think right