Last night I had the pleasure of speaking to two simultaneous audiences: the Nashua Cloud .NET and DevBoston community tech groups. The talk was on the Model Context Protocol (MCP) which, in a nutshell, is the rising star for answering the following question: what’s the best way to allow my LLM to call my code in a standard way?
There is a lot in that statement, so let me elaborate.
First, what do you mean by “the best way to allow my LLM to call my code” — why is the LLM calling my code at all? Don’t we invoke the LLM via its API, not the other way around? Good question, but LLMs can indeed invoke your code, and that is exactly how LLMs are empowered to do more as AI Agents. Think of an AI Agent as an LLM + a Goal (prompts) + Tools (code, such as that provided by MCP servers). The LLM uses the totality of the prompt (system prompt + user prompt + RAG data + any other context channeled in via the prompt) to understand the goal you’ve given it, and then figures out which tools to call to get that done.
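The LLM + Goal + Tools loop described above can be sketched in a few lines of Python. This is a hypothetical illustration, not any particular SDK’s API — the `llm` callable, message shapes, and tool dictionary are all placeholders for what frameworks like Semantic Kernel or the MCP SDKs handle for you.

```python
# Minimal sketch of the agent loop: the LLM sees the full prompt context,
# decides whether a tool is needed, and we execute tools on its behalf
# until it produces a final answer. (Hypothetical API shapes throughout.)

def run_agent(llm, goal, tools):
    """Drive `llm` toward `goal`, letting it call `tools` (name -> function)."""
    # The totality of context the LLM sees: system prompt + user goal (+ RAG, etc.)
    messages = [
        {"role": "system", "content": "You are a helpful agent."},
        {"role": "user", "content": goal},
    ]
    while True:
        reply = llm(messages, tool_names=list(tools))
        if reply["type"] == "tool_call":
            # The LLM decided a tool is needed: run it, feed the result back.
            result = tools[reply["name"]](**reply["arguments"])
            messages.append(
                {"role": "tool", "name": reply["name"], "content": str(result)}
            )
        else:
            # The LLM produced its final answer.
            return reply["content"]
```

The key design point is that the LLM never executes anything itself; it only *requests* tool calls, and the host loop decides how (and whether) to run them.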
In the simple Azure AI Agent I presented, the goal is to deliver an HTML snippet that follows HTML accessibility best practices in linking to a logo it tracks down for us. One of the tools is a web search to find the link to the logo. Another tool validates that the proposed link actually resolves to a legit image. A third tool could have created a text description of the image, but I made the design choice to leave that up to the Agent’s LLM since it was multimodal. (An older version had a separate tool for this that used a different LLM than the one driving the agent — one with vision capabilities. That is still a reasonable approach here for multiple reasons, but I kept things simple.)
Second, what do you mean by “in a standard way” – aren’t all LLMs different? It is actually the differences between LLMs that drive the benefits of a standard way. It has been possible for a while to let your LLM call out to tools, but there were many ways to do it. Doing so according to a cross-vendor, agreed-upon standard — which is what MCP represents — lowers the bar for creating reusable and independently testable tools. And marketplaces!
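Concretely, the standard is a wire format: MCP messages are JSON-RPC 2.0, and every host discovers a server’s tools with a `tools/list` request and invokes one with `tools/call`. Here is a sketch of what a tool invocation looks like on the wire — the tool name and arguments are illustrative, not from any specific server.

```python
import json

# An MCP tool invocation is a JSON-RPC 2.0 request. The host sends this to
# the server (over stdio or HTTP); the server runs the tool and replies with
# a matching-id result message.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "validate_logo",                       # illustrative tool name
        "arguments": {"url": "https://example.com/logo.png"},
    },
}

wire = json.dumps(call_request)  # what actually crosses the transport
assert json.loads(wire)["method"] == "tools/call"
```

Because every vendor’s host emits and consumes this same shape, a tool you build once can be exercised by Claude, Copilot, Semantic Kernel agents, or a test harness like MCP Inspector without changes.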
Remember, though, that many challenges remain ahead. There are a few more in the deck, but here are two:
The first screenshot is a reminder that there are limits to how many MCP tools an LLM (or host) can juggle; GitHub Copilot currently caps out at 128 tools, and you can get there quickly!
The second screenshot is a reminder that these are complex operational systems. This “major outage” (using Anthropic’s terminology) happened shortly before this talk, which complicated my planned preparation timeline. But it recovered before the talk’s timeslot. Phew.


Connect with Bill and Boston Azure AI
Links from the talk
- Assorted Cranking AI resources ➞ https://github.com/crankingai
- Code for the Agent ➞ https://github.com/crankingai/logo-agent
- Code for the Logo Validator MCP tool ➞ https://github.com/crankingai/logo-validator-mcp
- Code for the Brave Web Search MCP tool ➞ https://github.com/crankingai/brave-search-mcp
- Images I used in the example ➞ https://github.com/crankingai/bad-images (https://raw.githubusercontent.com/crankingai/bad-images/refs/heads/main/JPEG_example_flower-jpg.png)
Anthropic status page ➞ https://status.anthropic.com/ (see screenshot above).
Model Context Protocol (MCP) Resources
Standards & Cross-vendor Cooperation
- Model Context Protocol Introduction – Official introduction to MCP, described as “a USB-C port for AI applications” that standardizes how applications provide context to LLMs.
- Anthropic’s Model Context Protocol Announcement – The original announcement from Anthropic open-sourcing MCP as a universal standard for connecting AI systems with data sources.
- Official C# SDK for Model Context Protocol – The official C# SDK maintained in collaboration with Microsoft, available as a NuGet package: ModelContextProtocol.
- Microsoft’s Partnership with Anthropic for C# SDK – Microsoft’s announcement of their collaboration with Anthropic to create the official C# SDK for MCP.
- https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/ – A2A – Agent to Agent protocol
- https://techcrunch.com/2025/05/07/microsoft-adopts-googles-standard-for-linking-up-ai-agents/ – A2A from Google, being adopted by Microsoft
- Nanda Lab at MIT Media Lab – Research lab working on projects related to AI systems and tools at scale – a highly distributed AI future
SDKs & Samples
- Integrating MCP Tools with Semantic Kernel – A step-by-step guide for integrating MCP tools with Microsoft’s Semantic Kernel framework.
- Azure AI Agent with Semantic Kernel (C#) – Microsoft documentation on building AI agents with Azure and Semantic Kernel using C#.
- MCP Examples – Official examples of using the Model Context Protocol in various scenarios.
MCP Servers & Implementations
Popular MCP Servers
- GitHub MCP Server – GitHub’s official MCP server that provides seamless integration with GitHub APIs for automating workflows, extracting data, and building AI-powered tools. In case you’d like to create a Personal Access Token to allow your GitHub MCP tools to access github.com on your behalf ➞ https://github.com/settings/personal-access-tokens
- Playwright MCP Server – Microsoft’s MCP server that provides browser automation capabilities using Playwright, enabling LLMs to interact with web pages through structured accessibility snapshots.
- MCP Servers Repository – Collection of official reference implementations of MCP servers.
- Popular MCP Servers Directory – Curated list of popular MCP server implementations.
MCP Inspector Tool ➞ Check this out for sure
- MCP Inspector – Interactive debugging tool for testing and inspecting MCP servers.
- MCP Inspector on GitHub – Source code repository for the MCP Inspector tool.