What Is Model Context Protocol (MCP) and How It Turns AI Into a Universal Interface?

This article explains the Model Context Protocol (MCP), an open standard introduced by Anthropic that lets large language models interact with external tools and data. It covers the protocol's architecture, why it is needed, and how models choose tools, and walks through a step‑by‑step Python server implementation with code examples.


Overview of MCP

Model Context Protocol (MCP) is an open, universal protocol introduced by Anthropic that standardises how large language models (LLMs) communicate with external tools, databases and services. It provides a single, consistent interface – analogous to a USB‑C connector – so that models can access diverse resources without writing custom integrations for each.

Why MCP Is Needed

Without MCP, developers must hand‑wire a separate integration for every tool an agent needs (search, email, code review, and so on), which quickly becomes unmanageable as the number of required capabilities grows. MCP replaces that fragmentation with a common interface, enabling tool reuse, improving reliability, and keeping data processing local for security.

Core Architecture

MCP Host: the AI application (e.g., a chatbot or an AI‑powered IDE) that initiates requests.

MCP Client: runs inside the host and maintains a 1:1 connection to an MCP Server.

MCP Server: exposes tool definitions and resources, and executes tool calls.

Local Resources: files, databases or other assets on the host machine that the server can safely read.

Remote Resources: APIs or external services reachable via the server.
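
To make these roles concrete, the sketch below shows a minimal host‑side client connecting to a server over stdio, using the official MCP Python SDK directly rather than the wrapper classes used later in this article. The server script name (txt_counter.py, the server built below) and the tool name are assumed for illustration:

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the server as a subprocess and talk to it over stdio.
server_params = StdioServerParameters(command="python", args=["txt_counter.py"])

async def main():
    async with stdio_client(server_params) as (read, write):
        # One ClientSession per server: the 1:1 client-server connection.
        async with ClientSession(read, write) as session:
            await session.initialize()          # protocol handshake
            tools = await session.list_tools()  # discover tool definitions
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("count_desktop_txt_files", arguments={})
            print(result.content)

asyncio.run(main())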

How Models Choose Tools

The client sends a system prompt that lists all available tools with structured descriptions. The LLM analyses the user query, decides which tool(s) to invoke, and returns a JSON‑formatted tool call. The server executes the tool, returns the result, and the LLM incorporates the result into a final natural‑language response.

async def start(self):
    # Initialise every configured MCP server and collect its tools.
    for server in self.servers:
        await server.initialize()
    all_tools = []
    for server in self.servers:
        tools = await server.list_tools()
        all_tools.extend(tools)
    tools_description = "\n".join([tool.format_for_llm() for tool in all_tools])
    system_message = (
        "You are a helpful assistant with access to these tools:\n\n"
        f"{tools_description}\n"
        "Choose the appropriate tool based on the user's question. "
        "If no tool is needed, reply directly.\n\n"
        "IMPORTANT: When you need to use a tool, you must ONLY respond with "
        "the exact JSON object format below, nothing else:\n"
        "{\n"
        '    "tool": "tool-name",\n'
        '    "arguments": {\n'
        '        "argument-name": "value"\n'
        "    }\n"
        "}\n"
    )
    messages = [{"role": "system", "content": system_message}]
    while True:
        user_input = input("You: ")  # read the next user turn
        messages.append({"role": "user", "content": user_input})
        llm_response = self.llm_client.get_response(messages)
        result = await self.process_llm_response(llm_response)
        if result != llm_response:
            # A tool was executed: feed its result back for a final answer.
            messages.append({"role": "assistant", "content": llm_response})
            messages.append({"role": "system", "content": result})
            final_response = self.llm_client.get_response(messages)
            messages.append({"role": "assistant", "content": final_response})
        else:
            messages.append({"role": "assistant", "content": llm_response})

Tool Definition Example

from typing import Any


class Tool:
    """Represents a tool with its properties and formatting."""

    def __init__(self, name: str, description: str, input_schema: dict[str, Any]):
        self.name = name
        self.description = description
        self.input_schema = input_schema

    def format_for_llm(self) -> str:
        """Format tool information for the LLM system prompt."""
        args_desc = []
        if "properties" in self.input_schema:
            for param_name, param_info in self.input_schema["properties"].items():
                arg_desc = f"- {param_name}: {param_info.get('description', 'No description')}"
                if param_name in self.input_schema.get("required", []):
                    arg_desc += " (required)"
                args_desc.append(arg_desc)
        args_text = "\n".join(args_desc)
        return f"Tool: {self.name}\nDescription: {self.description}\nArguments:\n{args_text}"
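
For instance, a hypothetical schema for a file‑counting tool renders like this (the tool name and schema below are illustrative, not part of the server built later):

# Hypothetical schema for illustration only.
schema = {
    "properties": {"path": {"description": "Directory to scan"}},
    "required": ["path"],
}
tool = Tool("count_txt_files", "Count .txt files in a directory.", schema)
print(tool.format_for_llm())
# Tool: count_txt_files
# Description: Count .txt files in a directory.
# Arguments:
# - path: Directory to scan (required)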

Creating a Minimal MCP Server (Python)

The example below counts and lists .txt files on the user's desktop. It demonstrates the @mcp.tool() decorator, environment setup and server execution.

import os
from pathlib import Path
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Desktop TXT Counter")

@mcp.tool()
def count_desktop_txt_files() -> int:
    """Count the number of .txt files on the desktop."""
    username = os.getenv("USER") or os.getenv("USERNAME")
    desktop_path = Path(f"/Users/{username}/Desktop")  # macOS-style path; adjust for other OSes
    txt_files = list(desktop_path.glob("*.txt"))
    return len(txt_files)

@mcp.tool()
def list_desktop_txt_files() -> str:
    """List all .txt filenames on the desktop."""
    username = os.getenv("USER") or os.getenv("USERNAME")
    desktop_path = Path(f"/Users/{username}/Desktop")
    txt_files = list(desktop_path.glob("*.txt"))
    if not txt_files:
        return "No .txt files found on desktop."
    file_list = "
".join([f"- {file.name}" for file in txt_files])
    return f"Found {len(txt_files)} .txt files on desktop:
{file_list}"

if __name__ == "__main__":
    mcp.run()

Step‑by‑Step Development Guide

Install Claude Desktop (or any MCP‑compatible client) and a Python 3.10+ environment.

Install the MCP SDK, e.g. uv add "mcp[cli]" httpx.

Create the server file as shown above.

Run the server with mcp dev txt_counter.py. The MCP inspector UI starts at http://localhost:5173.

Configure the client (Claude Desktop) to point to the server via its JSON config file, as sketched after this list.

Ask the AI to count or list desktop .txt files; the model will invoke the appropriate tool.
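
A minimal claude_desktop_config.json entry for this server could look like the sketch below; the server name is arbitrary and the path is a placeholder for wherever txt_counter.py actually lives (the SDK's mcp install command can generate an equivalent entry for you):

{
  "mcpServers": {
    "desktop-txt-counter": {
      "command": "uv",
      "args": ["run", "--with", "mcp[cli]", "mcp", "run", "/absolute/path/to/txt_counter.py"]
    }
  }
}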

Debugging and Inspection

The MCP inspector visualises tool calls and responses. Official debugging guides are available at https://modelcontextprotocol.io/docs/tools/debugging, and the inspector documentation at https://modelcontextprotocol.io/docs/tools/inspector.

Comparison with Traditional Function Calls

Function‑call APIs (e.g., OpenAI, Google) let a model invoke predefined functions, but they are tightly coupled to a specific provider and require separate implementations for each platform. MCP abstracts the call mechanism into a protocol, making tool definitions portable across models and providers while preserving security (local resources never leave the host).
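
To make the difference concrete, here is the same capability expressed both ways. The OpenAI‑style schema follows the widely used function‑calling format and is shown for illustration only:

# Provider-specific: an OpenAI-style function schema that must be
# registered (and maintained) separately on each platform.
openai_style_function = {
    "name": "count_desktop_txt_files",
    "description": "Count the number of .txt files on the desktop.",
    "parameters": {"type": "object", "properties": {}},
}

# Portable: the same capability as the MCP tool defined earlier with
# @mcp.tool(). Any MCP-compatible client can discover and call it,
# regardless of which model or provider sits behind the client.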

Importance of Tool Descriptions

The model decides which tool to use solely from the structured description supplied in the system prompt. Therefore, clear tool names, docstrings and JSON schemas are essential. The MCP SDK derives these fields automatically from the decorated Python function:

@classmethod
def from_function(cls, fn: Callable, name: str | None = None, description: str | None = None, ...):
    func_name = name or fn.__name__  # function name becomes tool name
    func_doc = description or fn.__doc__ or ""  # docstring becomes description
    # additional metadata (async flag, parameter inspection) is collected here
    ...

Tool Execution and Result Feedback Loop

When the model outputs a JSON tool call, the client executes the corresponding function and feeds the result back to the model as a new system message. The loop looks like:

while True:
    user_input = input("You: ")  # read the next user turn
    messages.append({"role": "user", "content": user_input})
    llm_response = self.llm_client.get_response(messages)
    result = await self.process_llm_response(llm_response)  # executes tool if present
    if result != llm_response:
        # tool was executed: send result back for final answer
        messages.append({"role": "assistant", "content": llm_response})
        messages.append({"role": "system", "content": result})
        final_response = self.llm_client.get_response(messages)
        messages.append({"role": "assistant", "content": final_response})
    else:
        # no tool needed: return model's answer directly
        messages.append({"role": "assistant", "content": llm_response})

Conclusion

MCP provides a standardized, secure and extensible way for LLMs to interact with both local and remote resources. By decoupling tool definitions from any specific provider, it eliminates platform‑specific function‑call constraints, reduces boilerplate, and enables a growing ecosystem of reusable tools that can be accessed by any MCP‑compatible model.
