How to Use Anthropic’s Model Context Protocol for Seamless LLM Integration
This article explains Anthropic’s open‑source Model Context Protocol (MCP), covering its client‑server architecture, resource and tool definitions, and sampling workflow, and walks through step‑by‑step Python examples: a PoE 2 hot‑fix fetcher and a simple chatbot that use MCP to connect large language models with external data sources and functions.
What is MCP?
Model Context Protocol (MCP) is an open‑source protocol from Anthropic that standardises bidirectional communication between large language models (LLMs) and external data sources or tools, playing a role for AI integrations similar to the one HTTP plays for the web.
Goals
Standardisation: a uniform way to connect models to diverse data sources, avoiding custom adapters.
Flexibility: models can switch tools without code changes.
Openness: any developer can implement an MCP server.
Security: built‑in permission controls keep data owners in control.
Architecture
MCP follows a client‑server model:
Host (e.g., Claude for Desktop) initiates the connection.
Client runs inside the host application, maintains a 1:1 connection with the server and handles protocol messages.
Server provides resources, tools and prompts while keeping API keys private.
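As an illustration, a host can start a server as a subprocess and open a client session over stdio with the official Python SDK. This is a minimal sketch, assuming a local server.py; exact class and method names may vary across SDK versions:
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the server as a subprocess and talk to it over stdio
server_params = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # protocol handshake
            tools = await session.list_tools()  # discover what the server exposes
            print([tool.name for tool in tools.tools])

asyncio.run(main())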
Resource schema
{
  "uri": "string",          // unique identifier
  "name": "string",         // human-readable name
  "description": "string",  // optional
  "mimeType": "string"      // optional MIME type
}
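On the server side, the Python SDK's FastMCP helper can expose a resource with a decorator. A minimal sketch, with an illustrative URI and payload:
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo")

@mcp.resource("config://app-settings")
def app_settings() -> str:
    """Return application settings; the docstring serves as the description."""
    return '{"theme": "dark", "locale": "en"}'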
Prompt schema
{
  "name": "string",
  "description": "string",
  "arguments": [
    {
      "name": "string",
      "description": "string",
      "required": true
    }
  ]
}
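Prompts can be served the same way via FastMCP's prompt decorator. A sketch with an illustrative prompt name and argument:
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo")

@mcp.prompt()
def summarise(text: str) -> str:
    """Summarise the given text in three bullet points."""
    return f"Please summarise the following in three bullet points:\n\n{text}"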
Tool definition
Tools are executable functions exposed by the server. They are discovered via tools/list and invoked via tools/call.
{
  "name": "string",
  "description": "string",
  "inputSchema": {
    "type": "object",
    "properties": { /* tool-specific parameters */ }
  }
}
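From a client, discovery and invocation map directly onto two session methods. A sketch, assuming an initialised ClientSession as in the architecture example and referring ahead to the hot‑fix tool built later in this article:
# tools/list: discover what the server exposes
tools = await session.list_tools()
for tool in tools.tools:
    print(f"{tool.name}: {tool.description}")

# tools/call: invoke a tool by name with JSON arguments
result = await session.call_tool("find_poe2_hotfix", arguments={})
print(result.content)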
Sampling workflow
The server can request that the client’s LLM generate a response, enabling agent‑style interactions while preserving privacy. A sampling request carries the following fields:
{
  "messages": [
    {
      "role": "user|assistant",
      "content": {
        "type": "text|image",
        "text": "string",
        "data": "base64-encoded image",
        "mimeType": "string"
      }
    }
  ],
  "modelPreferences": {
    "hints": [{ "name": "string" }],
    "costPriority": 0.0,
    "speedPriority": 0.0,
    "intelligencePriority": 0.0
  },
  "systemPrompt": "string",
  "includeContext": "none|thisServer|allServers",
  "temperature": 0.0,
  "maxTokens": 0,
  "stopSequences": ["string"],
  "metadata": {}
}
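In the Python SDK, a tool can issue such a sampling request through its request context. This is a rough sketch, assuming the SDK's Context object and the session's create_message API; field and method names may differ between SDK versions:
from mcp.server.fastmcp import Context, FastMCP
from mcp.types import SamplingMessage, TextContent

mcp = FastMCP("demo")

@mcp.tool()
async def summarise_notes(notes: str, ctx: Context) -> str:
    """Ask the client's LLM to summarise text the server holds."""
    result = await ctx.session.create_message(
        messages=[
            SamplingMessage(
                role="user",
                content=TextContent(type="text", text=f"Summarise:\n{notes}"),
            )
        ],
        max_tokens=200,
    )
    # The client returns a single content block; only text is handled here
    return result.content.text if result.content.type == "text" else ""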
Example: Path of Exile 2 hot‑fix fetcher
The following Python snippet creates a minimal MCP tool that crawls the PoE 2 forum for the latest hot‑fix information.
from mcp.server.fastmcp import FastMCP
import httpx
from bs4 import BeautifulSoup

mcp = FastMCP("Path of Exile 2 hotfix")

target_url = "https://www.pathofexile.com/forum/view-forum/2212"

async def poe2_hotfix(url: str) -> str | None:
    """Scrape the forum index table and return its text, or None on failure."""
    headers = {"User-Agent": "Mozilla/5.0"}
    async with httpx.AsyncClient() as client:
        try:
            resp = await client.get(url, headers=headers, timeout=30.0)
            soup = BeautifulSoup(resp.text, "html.parser")
            table = soup.find("table")
            result = ""
            if table:
                for row in table.find_all("tr"):
                    for cell in row.find_all("td"):
                        result += cell.get_text(strip=True) + "\n"
                    result += "-" * 50 + "\n"  # separator between threads
            return result or None
        except Exception:
            return None

@mcp.tool()
async def find_poe2_hotfix() -> str:
    """Fetch the latest hot-fix posts from the PoE 2 forum."""
    data = await poe2_hotfix(target_url)
    return data or "Unable to find any hotfix"

if __name__ == "__main__":
    mcp.run(transport="stdio")
Install the SDK with pip install mcp and start the server using mcp dev server.py. The tool appears in the MCP Inspector UI and can be called from any MCP‑compatible client.
Quickstart commands
pip install mcp
mcp dev server.py
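To call the server from Claude for Desktop instead of the Inspector, register it in the app’s claude_desktop_config.json; the server name and path below are illustrative:
{
  "mcpServers": {
    "poe2-hotfix": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"]
    }
  }
}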
Example: Simple chatbot using MCP
The chatbot discovers tools from connected servers, injects their descriptions into the system prompt, and decides whether to call a tool or answer directly.
# Discover tools from all connected servers
all_tools = []
for server in self.servers:
    tools = await server.list_tools()
    all_tools.extend(tools)

# Render each tool's name and description for the system prompt
tools_description = "\n".join(
    f"{tool.name}: {tool.description}" for tool in all_tools
)

# System prompt sent to the LLM
system_message = (
    "You are a helpful assistant with access to these tools:\n\n"
    f"{tools_description}\n\n"
    "Choose the appropriate tool based on the user's question. "
    "If no tool is needed, reply directly.\n\n"
    "When you need to use a tool, respond ONLY with a JSON object of the form: "
    '{ "tool": "tool-name", "arguments": { "arg-name": "value" } }'
)
Runtime flow (a minimal dispatch step is sketched after the list):
User input arrives.
The input and tool context are sent to the LLM.
The LLM returns either a tool‑call JSON or a direct answer.
If a tool is called, the server executes it and returns the result.
The LLM formats the result and sends the final response to the user.
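To make steps 3 and 4 concrete, here is a minimal sketch of the dispatch step. It assumes the same self.servers wrappers as the snippet above, plus a hypothetical execute_tool helper on each wrapper that forwards to the protocol's tools/call:
import json

async def dispatch(reply: str, servers: list) -> str:
    """Run a tool if the LLM's reply is a tool-call JSON; otherwise pass it through."""
    try:
        call = json.loads(reply)
    except json.JSONDecodeError:
        return reply  # plain-text answer, no tool requested
    if not isinstance(call, dict) or "tool" not in call:
        return reply
    for server in servers:
        tools = await server.list_tools()
        if any(tool.name == call["tool"] for tool in tools):
            # execute_tool is a hypothetical wrapper around tools/call
            result = await server.execute_tool(call["tool"], call.get("arguments", {}))
            return f"Tool execution result: {result}"
    return f"No server exposes a tool named {call['tool']}"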
Further extensions
Developers can refine prompts, add richer crawlers, or expand the tool set while preserving the protocol’s security and privacy guarantees.