Unlocking AI Agent Integration with Model Context Protocol (MCP): A Complete Guide
This article explains how the Model Context Protocol (MCP) standardizes AI agent communication with external tools, outlines its benefits, describes its core components, showcases open‑source implementations, and provides step‑by‑step Python examples for building MCP servers and clients.
Model Context Protocol (MCP) Overview
With the rise of general-purpose AI agents such as OpenManus and OWL, MCP is being integrated deeply into agent frameworks. In LLM applications, the model acts as the brain and orchestrates multiple tools, turning a pure "think tank" into a "think tank plus execution team".
LLMs integrate with external tools via Function Calling, a mechanism in which the model decides, based on context, which function to invoke and with what arguments; the host application then executes the call on the model's behalf, making the tool a "run-around assistant" for the expert model.
Different models implement Function Calling differently, leading to non-standard APIs and duplicated integration work. Each external function also requires a JSON-Schema description and a carefully crafted prompt template to keep responses accurate.
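As a concrete illustration, here is what such a JSON-Schema tool description looks like in the OpenAI-style Function Calling format; the function name and fields are hypothetical, chosen to match the weather example later in this article:

```python
# Hypothetical JSON-Schema description of a weather-query function,
# in the OpenAI-style Function Calling "tools" format.
weather_tool = {
    "type": "function",
    "function": {
        "name": "query_weather",
        "description": "Return historical weather for a city and month.",
        "parameters": {  # JSON Schema for the function's arguments
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'beijing'"},
                "year": {"type": "integer"},
                "month": {"type": "integer", "minimum": 1, "maximum": 12},
            },
            "required": ["city"],
        },
    },
}
```

Every model vendor expects a slightly different variant of this structure, which is exactly the duplication MCP is designed to remove.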
To address these issues, Anthropic released the open‑source Model Context Protocol (MCP) in November 2024, aiming to simplify AI‑to‑external‑data‑source connections by providing a universal access interface that eliminates the need for custom integration code.
01 AI Agent and MCP Relationship
MCP (Model Context Protocol) standardizes the interaction between AI models and data sources/tools, similar to how USB‑C standardizes hardware connections. Developers can write a Function once according to the MCP spec and share it with others, turning isolated function development into collaborative, reusable components.
02 What Is MCP?
MCP, introduced by Anthropic, is a protocol that defines how AI models (clients) communicate with external services (servers). It unifies naming (MCP Client for the LLM environment, MCP Server for functions) and development conventions, allowing developers to share tools such as weather queries, web crawling, or database access across projects.
03 MCP Open‑Source Implementations
Several open‑source implementations exist, including official and community servers. Notable resources:
| Name | Address | Introduction |
| --- | --- | --- |
| MCP Protocol | https://modelcontextprotocol.io/introduction | The official MCP specification, with implementations in mainstream languages |
| Awesome MCP Servers | https://mcpservers.org/ | Lists many available MCP servers, from production-grade to experimental, with a focus on file access and tool integration |
| Dify Connect MCP | https://marketplace.dify.ai/plugins/hjlarry/mcp-server | Provides complete open-source code and is compatible with MCP Inspector, making debugging and optimization easier |
04 MCP Basic Architecture
Host: The application embedding the LLM (e.g., a chatbot or IDE plugin) that decides when external information or actions are needed.
Client: Runs inside the host, maintains the connection to MCP servers, translates host requests into MCP messages, and receives responses.
Server: An independent program that exposes capabilities (Resources, Tools, Prompts) via the MCP protocol. Resources provide data (files, API responses), Tools execute actions (database queries, file reads), and Prompts are predefined templates for specific tasks.
05 MCP Hands‑On Example (Python)
Below is a minimal MCP server that fetches weather data and a client that discovers and calls the server’s tool.
Server (FastMCP)
<code># Create the project directory
uv init client
cd client
# Add dependencies
uv add mcp openai python-dotenv httpx html-to-json
</code>
<code>
import httpx
from typing import Any
from mcp.server.fastmcp import FastMCP
import html_to_json

# Initialize the MCP server
mcp = FastMCP("WeatherServer")

# Historical-weather pages on tianqi24.com, parameterized by city, year, month
WEATHER_API_BASE = "https://www.tianqi24.com/{}/history{}{:0>2d}.html"
USER_AGENT = "weather-app/1.0"

async def fetch_weather(city: str, year: int, month: int) -> dict[str, Any] | str:
    """Fetch the raw history page; return HTML text on success or an error dict."""
    url = WEATHER_API_BASE.format(city, year, month)
    headers = {"User-Agent": USER_AGENT}
    async with httpx.AsyncClient() as client:
        try:
            response = await client.get(url, headers=headers, timeout=30.0)
            response.raise_for_status()
            return response.content.decode('utf-8')
        except httpx.HTTPStatusError as e:
            return {"error": f"HTTP error: {e.response.status_code}"}
        except Exception as e:
            return {"error": f"Request failed: {str(e)}"}
def format_weather(data: dict | str, city: str, year: int, month: int) -> str:
    """Parse the HTML history page and format a short weather summary."""
    if isinstance(data, dict):
        return f"⚠️ {data.get('error', 'Unknown error')}"
    try:
        json_dict = html_to_json.convert(data)
        # Navigate the converted HTML tree down to the per-day <li> entries
        days = json_dict['html'][0]['body'][0]['section'][0]['section'][0]['section'][0]['article'][1]['section'][0]['ul'][0]['li'][1:]
        month_dict = {}
        for ele in days:
            cols = ele['div']
            _month, day = cols[0]['_value'].split('-')
            month_dict[day] = {
                # '转' joins a "changing to" condition, e.g. "晴转多云"
                'weather': cols[1]['_value'] if len(cols[1]) == 1 else '转'.join([cols[1]['b'][0]['_value'], cols[1]['span'][0]['_value'][2:]]),
                'low': cols[3]['_value'].split('℃')[0],
                'high': cols[2]['_value'].split('℃')[0],
                'AQI': cols[4]['_value'],
                'wind': cols[5]['_value'],
                'rainfall': cols[6]['_value'],
            }
    except Exception as e:
        return f"Could not parse weather data: {e}"
    # Simplified output (example): report the most recent day in the month
    last_day = max(month_dict)
    info = month_dict[last_day]
    return (f"🌍 {city} {year}-{month:02d}-{last_day}\n"
            f"🌡 High: {info['high']}°C\n"
            f"🌡 Low: {info['low']}°C\n"
            f"💨 AQI: {info['AQI']}")
@mcp.tool()
async def query_weather(city: str, year: int = 2025, month: int = 5) -> str:
    """Return weather information for the specified city and month."""
    data = await fetch_weather(city, year, month)
    return format_weather(data, city, year, month)

if __name__ == "__main__":
    # Serve over stdio so a local client can spawn this process directly
    mcp.run(transport='stdio')
</code>
Client
<code>import asyncio
import json
import os
import sys
from contextlib import AsyncExitStack
from typing import Optional

from dotenv import load_dotenv
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from openai import OpenAI

load_dotenv()
class MCPClient:
    def __init__(self):
        """Initialize the MCP client"""
        self.exit_stack = AsyncExitStack()
        self.openai_api_key = os.getenv("OPENAI_API_KEY")
        self.base_url = os.getenv("BASE_URL")
        self.model = os.getenv("MODEL")
        if not self.openai_api_key:
            raise ValueError("❌ OpenAI API key not found; set OPENAI_API_KEY in your .env file")
        self.client = OpenAI(api_key=self.openai_api_key, base_url=self.base_url)
        self.session: Optional[ClientSession] = None
    async def connect_to_server(self, server_script_path: str):
        """Start the MCP server and list the tools it exposes"""
        is_python = server_script_path.endswith('.py')
        is_js = server_script_path.endswith('.js')
        if not (is_python or is_js):
            raise ValueError("Server script must be a .py or .js file")
        command = "python" if is_python else "node"
        server_params = StdioServerParameters(command=command, args=[server_script_path], env=None)
        stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
        self.stdio, self.write = stdio_transport
        self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))
        await self.session.initialize()
        response = await self.session.list_tools()
        print("\nConnected to server; available tools:", [tool.name for tool in response.tools])
    async def process_query(self, query: str) -> str:
        """Send a query to the LLM, executing MCP tool calls as requested."""
        messages = [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": query},
        ]
        response = await self.session.list_tools()
        # Map MCP tool metadata to the OpenAI tools format:
        # the JSON schema goes under "parameters", not "input_schema"
        available_tools = [
            {
                "type": "function",
                "function": {
                    "name": tool.name,
                    "description": tool.description,
                    "parameters": tool.inputSchema,
                },
            }
            for tool in response.tools
        ]
        completion = self.client.chat.completions.create(model=self.model, messages=messages, tools=available_tools)
        choice = completion.choices[0]
        if choice.finish_reason == "tool_calls":
            tool_call = choice.message.tool_calls[0]
            tool_name = tool_call.function.name
            tool_args = json.loads(tool_call.function.arguments)
            # Execute the tool via the MCP session and feed the result back to the model
            result = await self.session.call_tool(tool_name, tool_args)
            messages.append(choice.message.model_dump())
            messages.append({"role": "tool", "content": result.content[0].text, "tool_call_id": tool_call.id})
            final = self.client.chat.completions.create(model=self.model, messages=messages)
            return final.choices[0].message.content
        return choice.message.content
    async def chat_loop(self):
        """Interactive loop until the user types 'quit'."""
        print("\n🤖 MCP client started! Type 'quit' to exit")
        while True:
            try:
                query = input("\nYou: ").strip()
                if query.lower() == 'quit':
                    break
                response = await self.process_query(query)
                print(f"\n🤖 OpenAI: {response}")
            except Exception as e:
                print(f"\n⚠️ Error: {str(e)}")

    async def cleanup(self):
        await self.exit_stack.aclose()
async def main():
    if len(sys.argv) < 2:
        print("Usage: python client.py <path_to_server_script>")
        sys.exit(1)
    client = MCPClient()
    try:
        await client.connect_to_server(sys.argv[1])
        await client.chat_loop()
    finally:
        await client.cleanup()

if __name__ == "__main__":
    asyncio.run(main())
</code>
Run the client and server together:
<code>uv run client.py server.py</code>
For debugging, Anthropic provides the MCP Inspector tool, which can launch a server with inspection capabilities:
<code>npx -y @modelcontextprotocol/inspector uv run server.py</code>
Open the Inspector UI at http://127.0.0.1:5173/ to monitor tool execution.
06 Summary
Beyond the stdio transport, MCP also supports SSE-based remote communication, enabling more flexible deployment scenarios. As a "USB-C port for AI", MCP accelerates agent development by providing a universal, secure, and reusable interface for Tools, Resources, and Prompts. The growing ecosystem of open-source MCP servers and frameworks promises rapid advancement of LLM-driven applications.
Instant Consumer Technology Team