Channel
Interviewed Person
v0
In this video, CJ talks about MCP: what it is, why we need it, how to add tools to LLM clients, and how to create and debug our own tools.

Be the ~14,700th person to join our super tasty newsletter https://bit.ly/syntax_snackpack

00:00 Intro
00:30 Everything is Inputs and Outputs
01:55 How LLMs are Created
03:06 LLM Context In Commercial Services
04:29 LLMs are Static
04:54 List of Tools as Context
05:17 LLM Tool Calls
07:42 The LLM Loop
08:27 What is MCP?
09:40 Weather MCP Server Example
12:12 Building MCP Servers with SDKs
13:51 Weather MCP Server Code Walkthrough
17:45 Local MCP Servers with stdio
20:34 Remote MCP Servers
24:34 Debugging and Inspecting MCP Servers
29:03 Awesome MCP Servers
29:36 MCP Recap
29:56 MCP Servers in Cursor
30:23 context7 MCP Server
33:18 MCP Servers in VS Code GitHub Copilot (FREE)
34:47 MCP Server Setup in Gemini CLI (FREE)
36:25 Web Scraping and Undocumented APIs in MCP Servers
38:01 MCP Client Support
38:51 Sentry MCP Server
42:18 Thanks!

Model Context Protocol | https://modelcontextprotocol.io/
MCP TypeScript SDK | https://github.com/modelcontextprotocol/typescript-sdk
Currency Conversion MCP Server | https://github.com/wesbos/currency-conversion-mcp
Claude MCP with NVM | https://gist.github.com/cognivator/ceca5a68d696b0575962ff3d18c31aa5
MCP Debugger / Inspector | https://github.com/modelcontextprotocol/inspector
Awesome MCP Servers | https://github.com/punkpeye/awesome-mcp-servers
context7 MCP Server | https://context7.com/
Sentry MCP Server | https://docs.sentry.io/product/sentry-mcp/
MCP Client Feature Support Matrix | https://modelcontextprotocol.io/clients#feature-support-matrix

------------------------------------------------------------------------------

Hit us up on Socials!
Syntax: https://x.com/syntaxfm
Scott: https://x.com/stolinski
Wes: https://x.com/wesbos
CJ: https://x.com/CodingGarden
Randy: https://www.youtube.com/@randyrektor

http://www.syntax.fm

Brought to you by Sentry.io

#webdevelopment #webdeveloper #javascript #syntax #syntaxfm #webdev

Syntax
Interviewed: v0
In this video, we're talking all about MCP, or Model Context Protocol. We'll talk about what it is, why it's needed, how to build your own servers, how to integrate with existing ones, how to debug them, and everything in between. If that sounds good to you, let's dive in. My name is CJ. Welcome to Syntax. [Music] Now, before we talk about MCP, let's take a step back and just talk about how LLMs work. If you're already familiar with all of this stuff, go ahead and skip ahead. You can see the timestamps in the description. But for
all of this, including LLMs and MCP, we can think about things in terms of input, some process or black box, and some output. And the world of programming is like this as well. A simple example of this is an add function. So add we can think of as our black box. Our inputs might be two numbers. So let's say we pass two and two into this add black box, and then the output of this would be four. Now, add is predefined, right? We know how to add two numbers together. It's mathematical. We can basically take any two numbers and add them together. There's a very specific algorithm for doing that.
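As a quick sketch of that idea in TypeScript (the function here is just an illustration of a fully specified black box, not anything from the video's code):

```typescript
// A black box whose process is completely defined: given two numbers,
// the exact algorithm for producing the output is known.
function add(a: number, b: number): number {
  return a + b;
}

console.log(add(2, 2)); // 4
```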
We can also think of a black box that has a more interesting process. So for example, a text predictor. Let's say we have this black box called text predictor. It takes text as input and spits out text as output. One example might be: if we pass in "Mary had a little," we might expect that the text predictor responds with "lamb," because "Mary had a little lamb" is pretty common in the English language. Now, what actually happens inside of this text predictor, we don't know.
We could come up with ways of creating a text predictor that isn't an LLM. We could come up with a list of word pairings and say that, like, 80% of the time when the words Mary and little appear in a sentence, lamb is likely the next word to occur. We could come up with our own algorithm for that. But LLMs are basically this, just with a much more complex process behind them, and it's not a specific algorithm.
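A toy version of that word-pairing idea might look something like the sketch below. The lookup table and the predict function are invented purely for illustration, and this has nothing to do with how a real LLM works internally:

```typescript
// A hand-rolled "text predictor": a hard-coded lookup from a phrase to the
// word most likely to follow it. Purely illustrative, not a real model.
const likelyNextWord: Record<string, string> = {
  "mary had a little": "lamb",
  "twinkle twinkle little": "star",
};

function predictNext(text: string): string {
  return likelyNextWord[text.trim().toLowerCase()] ?? "(no prediction)";
}

console.log(predictNext("Mary had a little")); // "lamb"
```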
So the way we create an LLM is also through this input-output process. In this case, the black box is training. The input is going to be some training data, and the output is going to be the LLM itself. Now, this training data in a lot of cases might be question-answer pairs. So it might be a million different sets of questions and answers, and we build out a neural network based on those questions and answers. And we get this large language model that, given some question, will output a seemingly correct answer based on all of the data that it was trained on. But it's interesting to see that even the creation of an LLM is an input-output process. Now, once we have
an LLM, it works exactly that way. Given some user input, it spits out some output, and it actually is a text prediction model. So, as much as we might think that LLMs are smart or doing more than what they actually are doing, for the most part they're just predicting the next piece of text, the next token in any given sequence. And it actually turns out that if your training data is good enough and large enough and general enough, you can get an LLM that works in most cases and seems like it's actually pretty smart, able to answer questions or give facts about certain things.
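Conceptually, that next-token behavior is just a loop. Here's a rough sketch, where predictNextToken is a hypothetical stand-in for the actual neural network:

```typescript
// Hypothetical stand-in for the model: in reality this is a neural network
// that scores every possible next token given the text so far.
function predictNextToken(context: string): string {
  return "<end>"; // stub so the sketch runs
}

// Generating a reply is just "predict the next token, append it, repeat."
function complete(prompt: string, maxTokens = 100): string {
  let output = "";
  for (let i = 0; i < maxTokens; i++) {
    const token = predictNextToken(prompt + output);
    if (token === "<end>") break;
    output += token;
  }
  return output;
}
```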
Now, when you're interacting with an LLM in a tool like ChatGPT or Claude Desktop, or even the Cursor IDE or VS Code with GitHub Copilot, there's actually a lot more that goes into this user input that is eventually passed to that LLM. And it looks kind of like this. You typically have some kind of system prompt that you don't have access to, but the owners of this LLM service, whether it's OpenAI or Google or Anthropic, have come up with some kind of system prompt that makes sure
that the output it generates aligns with, maybe, their ideals. Maybe it adds some safety guards. Maybe it has a specific personality. And so this system prompt is going to describe that. If you use a tool like ChatGPT or Claude Desktop, you know that sometimes it lets you save preferences, like "I want the LLM to always output the most succinct answer" or "I don't want you to always agree with me; you should push back." So with tools like ChatGPT and Claude Desktop, you add those in your user preferences, and then those are sent along to the LLM in every single message.
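Roughly, the client bundles all of that context into every request. The sketch below assumes an OpenAI-style chat messages shape; the exact fields and the prompt wording vary by provider and are invented here for illustration:

```typescript
type Message = { role: "system" | "user" | "assistant"; content: string };

// What a chat client might actually send to the model on each turn.
const messages: Message[] = [
  // Hidden system prompt from the provider (wording invented here).
  { role: "system", content: "You are a helpful assistant. Follow the safety guidelines..." },
  // Saved user preferences, appended by the client.
  { role: "system", content: "Preferences: keep answers succinct; push back instead of always agreeing." },
  // What the user actually typed.
  { role: "user", content: "Mary had a little..." },
];
```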