Important

The information in this post is accurate as of the publish date. I'll update it as I find time.

Introduction to MCP

MCP (Model Context Protocol) is an open protocol for connecting LLMs to external tools and data sources. Anthropic introduced it in late 2024.

The key idea: when you give Claude access to an MCP server, it dynamically discovers available tools and decides which ones to use.

This post covers MCP server components, how they work together, and how Claude picks the right tool for your query. I’m assuming you’ve encountered MCP before—if not, start with the documentation.

Examples use Claude, but this applies to any LLM supporting MCP.

Core Components

MCP is a client-server architecture. A host (Claude Desktop, an IDE, any AI tool) connects to multiple MCP servers. Each server handles one capability—filesystem access, web search, database queries, whatever.

The pieces:

  1. Host — Claude Desktop or your AI tool
  2. Client — The protocol handler (usually transparent)
  3. Server — Code you write to expose tools and resources
  4. Data sources — Files, databases, APIs that servers access

The rest of this post focuses on building MCP servers—the code side.

Parts of an MCP Server

MCP servers expose three types of capabilities—resources, tools, and prompts—plus a couple of supporting building blocks you'll use when writing one:

1. Resources

Resources in MCP are like GET endpoints in a REST API. They provide data to LLMs but shouldn’t perform significant computation or have side effects. Resources are identified by URIs and can be static or dynamic.

Example:

@mcp.resource("config://app")
def get_config() -> str:
    """Static configuration data"""
    return "App configuration here"

@mcp.resource("users://{user_id}/profile")
def get_user_profile(user_id: str) -> str:
    """Dynamic user data"""
    return f"Profile data for user {user_id}"

2. Tools

Tools are how LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects. They’re similar to POST endpoints in a REST API.

Example:

@mcp.tool()
def calculate_bmi(weight_kg: float, height_m: float) -> float:
    """Calculate BMI given weight in kg and height in meters"""
    return weight_kg / (height_m**2)

Tools are defined with the @mcp.tool() decorator, which uses Python type hints and docstrings to automatically generate tool definitions, making them self-documenting and ensuring type safety.
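Under the hood, the decorator turns that signature into a tool definition the client can list. Roughly—the field names follow the MCP spec's `tools/list` result format, though the exact schema FastMCP emits may differ in detail:

```python
# Approximate shape of the advertised definition for calculate_bmi,
# following the MCP spec's tools/list result format.
tool_definition = {
    "name": "calculate_bmi",
    "description": "Calculate BMI given weight in kg and height in meters",
    "inputSchema": {
        "type": "object",
        "properties": {
            "weight_kg": {"type": "number"},
            "height_m": {"type": "number"},
        },
        "required": ["weight_kg", "height_m"],
    },
}
```

This is why docstrings and type hints matter so much in MCP servers: they're not just documentation for humans, they're the metadata Claude reads when deciding whether to call your tool.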

3. Prompts

Prompts are reusable templates that help LLMs interact with your server effectively. Think of these as pre-defined conversation starters or workflows.

Example:

@mcp.prompt()
def review_code(code: str) -> str:
    return f"Please review this code:\n\n{code}"

4. Context Object

The Context object gives your tools and resources access to MCP capabilities:

@mcp.tool()
async def long_task(files: list[str], ctx: Context) -> str:
    """Process multiple files with progress tracking"""
    for i, file in enumerate(files):
        ctx.info(f"Processing {file}")
        await ctx.report_progress(i, len(files))
        data, mime_type = await ctx.read_resource(f"file://{file}")
    return "Processing complete"

The Context object provides methods for reporting progress, logging information, and accessing other resources, making it easier to build complex tools.

5. Lifespan Management

For more complex applications, you might need to manage application lifecycle with type-safe context:

from collections.abc import AsyncIterator
from contextlib import asynccontextmanager
from dataclasses import dataclass

from mcp.server.fastmcp import FastMCP

@dataclass
class AppContext:
    db: "Database"  # placeholder for your real database client

@asynccontextmanager
async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
    """Manage application lifecycle with type-safe context"""
    # Initialize on startup
    db = await Database.connect()
    try:
        yield AppContext(db=db)
    finally:
        # Cleanup on shutdown
        await db.disconnect()

# Pass lifespan to server
mcp = FastMCP("My App", lifespan=app_lifespan)

So, how do these pieces work together?

MCP Communication Flow

Here’s what happens when you ask Claude a question:

  1. You ask Claude something
  2. Claude connects to an MCP server and asks “what can you do?”
  3. The server responds with a list of tools and resources
  4. Claude picks a relevant tool or resource
  5. The server fetches data from its sources (APIs, databases, files, whatever)
  6. The server sends the result back to Claude
  7. Claude uses that data to answer your question

It’s simple because each part has one job.

sequenceDiagram
    participant U as User
    participant H as MCP Host
    participant C as MCP Client
    participant S as MCP Server
    participant E as External Service
    U->>H: Ask a question
    H->>C: Process request
    C->>S: Initialize connection
    S-->>C: Acknowledge connection
    C->>S: Discover capabilities
    S-->>C: Return available tools/resources
    C->>S: Request resource or invoke tool
    S->>E: Access external service
    E-->>S: Return data
    S-->>C: Return formatted response
    C-->>H: Format for AI consumption
    H-->>U: Present answer to user

This flow creates a seamless experience where users can interact with their data and external services through natural language conversations with the LLM.

We've covered how a request flows through MCP, but how does the client actually know which tool is the best fit? Let's look into that now.

MCP Tool Discovery and Selection

MCP is built on JSON-RPC 2.0, which gives Claude a standard way to ask servers “what tools do you have?” Each server responds with metadata: tool names, descriptions, parameter types, and documentation.
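Concretely, that discovery step is a JSON-RPC 2.0 exchange. A sketch of the two messages—the method name and envelope come from the MCP spec, while the tool entry shown is illustrative:

```python
# JSON-RPC 2.0 request the client sends to discover tools (per the MCP spec).
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Sketch of the server's response; the single tool entry is illustrative.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "calculate_bmi",
                "description": "Calculate BMI given weight in kg and height in meters",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "weight_kg": {"type": "number"},
                        "height_m": {"type": "number"},
                    },
                },
            }
        ]
    },
}
```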

Claude then matches tools to queries:

  1. Parse the query — What does the user actually need?
  2. List available tools — What can the MCP servers do?
  3. Score matches — Which tool best fits the intent?
  4. Extract parameters — What values from the query become parameters?
  5. Call the tool — If confidence is high enough, invoke it

This happens automatically. You ask a question, Claude picks a tool, and you get an answer.

How Claude Actually Picks Tools

When Claude sees a query, it’s reading the documentation of each available tool and deciding which one fits.

If you ask “What’s the weather in Bangalore?”, Claude:

  • Reads the description of the weather-checking tool
  • Extracts “Bangalore” as the city parameter
  • Calls the tool with that parameter
  • Returns the result
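On the wire, those steps collapse into a single `tools/call` request. The tool name `get_weather` and its `city` parameter are hypothetical, but the envelope follows the MCP spec:

```python
# Hypothetical tools/call request for a weather tool; "get_weather" and
# "city" are made up, but the message structure follows the MCP spec.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Bangalore"},
    },
}
```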

For complex queries, Claude might chain tools together. But it doesn’t need explicit instructions—it reads the tool documentation like a human developer reading an API reference.

This is more flexible than traditional APIs where you hardcode specific function calls. Claude adapts to whatever tools are available.

That’s the flow. MCP isn’t magic—it’s just a way for Claude to ask “what can you do?” and then pick the right tool for the job.
