Understanding how MCP discovers which tool to use
Model Context Protocol (MCP) represents a significant advancement in how we connect AI language models with external data sources and tools. This post looks at how MCP figures out which tool to use.
The information on this blog is true as of the publish date. I will update it when I get time.
Introduction to MCP
MCP (Model Context Protocol) is an open protocol for connecting LLMs to external tools and data sources. Anthropic introduced it in late 2024.
The key idea: when you give Claude access to an MCP server, it dynamically discovers available tools and decides which ones to use.
This post covers MCP server components, how they work together, and how Claude picks the right tool for your query. I’m assuming you’ve encountered MCP before—if not, start with the documentation.
Examples use Claude, but this applies to any LLM supporting MCP.
Core Components
MCP is a client-server architecture. A host (Claude Desktop, an IDE, any AI tool) connects to multiple MCP servers. Each server handles one capability—filesystem access, web search, database queries, whatever.
The pieces:
- Host — Claude Desktop or your AI tool
- Client — The protocol handler (usually transparent)
- Server — Code you write to expose tools and resources
- Data sources — Files, databases, APIs that servers access
The rest of this post focuses on building MCP servers—the code side.
Parts of an MCP Server
MCP servers expose three types of capabilities (resources, tools, and prompts), plus two supporting building blocks: the Context object and lifespan management.
1. Resources
Resources in MCP are like GET endpoints in a REST API. They provide data to LLMs but shouldn’t perform significant computation or have side effects. Resources are identified by URIs and can be static or dynamic.
2. Tools
Tools are how LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects. They’re similar to POST endpoints in a REST API.
Tools are defined with the @mcp.tool() decorator, which uses Python type hints and docstrings to automatically generate tool definitions, making them self-documenting and ensuring type safety.
3. Prompts
Prompts are reusable templates that help LLMs interact with your server effectively. Think of these as pre-defined conversation starters or workflows.
4. Context Object
The Context object gives your tools and resources access to MCP capabilities:
The Context object provides methods for reporting progress, logging information, and accessing other resources, making it easier to build complex tools.
5. Lifespan Management
For more complex applications, you might need to manage application lifecycle with type-safe context:
So, how do they work together?
MCP Communication Flow
Here’s what happens when you ask Claude a question:
- You ask Claude something
- Claude connects to an MCP server and asks “what can you do?”
- The server responds with a list of tools and resources
- Claude picks a relevant tool or resource
- The server fetches data from its sources (APIs, databases, files, whatever)
- The server sends the result back to Claude
- Claude uses that data to answer your question
It’s simple because each part has one job.
This flow creates a seamless experience where users can interact with their data and external services through natural language conversations with the LLM.
We have covered the overall flow, but how does the MCP client know which tool is best for the job? Let's look into that now.
MCP Tool Discovery and Selection
MCP is built on JSON-RPC 2.0, which gives Claude a standard way to ask servers “what tools do you have?” Each server responds with metadata: tool names, descriptions, parameter types, and documentation.
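That exchange can be sketched as raw JSON-RPC 2.0 messages. The `tools/list` and `tools/call` method names come from the MCP spec; the `get_weather` tool and its schema are hypothetical:

```python
import json

# Host -> server: "what tools do you have?"
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Server -> host: tool metadata the model reads when choosing a tool.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [{
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "inputSchema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }]},
}

# Host -> server: the model picked a tool and extracted its arguments.
call_request = {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                "params": {"name": "get_weather",
                           "arguments": {"city": "Bangalore"}}}

print(json.dumps(call_request, indent=2))
```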
Claude then matches tools to queries:
- Parse the query — What does the user actually need?
- List available tools — What can the MCP servers do?
- Score matches — Which tool best fits the intent?
- Extract parameters — What values from the query become parameters?
- Call the tool — If confidence is high enough, invoke it
This happens automatically. You ask a question, Claude picks a tool, and you get an answer.
How Claude Actually Picks Tools
When Claude sees a query, it’s reading the documentation of each available tool and deciding which one fits.
If you ask “What’s the weather in Bangalore?”, Claude:
- Reads the description of the weather-checking tool
- Extracts “Bangalore” as the city parameter
- Calls the tool with that parameter
- Returns the result
For complex queries, Claude might chain tools together. But it doesn’t need explicit instructions—it reads the tool documentation like a human developer reading an API reference.
This is more flexible than traditional APIs where you hardcode specific function calls. Claude adapts to whatever tools are available.
That’s the flow. MCP isn’t magic—it’s just a way for Claude to ask “what can you do?” and then pick the right tool for the job.
Resources
Posts in this series:
- Work in Progress