What Is MCP and How It Revolutionizes AI Tool Integration

This article explains the MCP protocol for AI agents, detailing why a universal tool‑calling standard is needed, how it solves the M×N integration nightmare, the roles and execution stages involved, and demonstrates its use with Cherry Studio while highlighting current limitations.

Wuming AI

MCP Overview

MCP (Model Context Protocol) is a client‑server protocol that lets an AI agent invoke external tools in a standardized way.

Why MCP Is Needed

Solving the M×N Integration Nightmare

Before MCP, each AI application had to be hard‑coded to each external tool, requiring M × N integration modules for M applications and N tools. MCP introduces a universal protocol—analogous to the USB‑C standard—so each AI app implements an MCP client once and each tool implements an MCP server once, unlocking the whole ecosystem with a single integration.
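The arithmetic behind that claim is simple enough to state directly:

```python
# Integration modules needed for M AI applications and N tools.
def point_to_point(m, n):
    # Every app wired directly to every tool: M x N bespoke integrations.
    return m * n

def with_mcp(m, n):
    # Each app implements one MCP client, each tool one MCP server: M + N.
    return m + n

print(point_to_point(10, 20))  # 200 bespoke integrations
print(with_mcp(10, 20))        # 30 protocol implementations
```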

Standardising Tool Discovery and Use

Standardised Interface: Provides a unified framework for seamless interaction between LLMs and tools.

Tool Discoverability: Defines a common description format, hosting model and exposure method, allowing LLMs to automatically discover and understand tool capabilities.

Reliability and Executability: Guarantees that tools are reliable, discoverable and executable, and adds built‑in approval and audit workflows.
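As a concrete illustration of the common description format, here is a sketch of a tool descriptor: the field names (`name`, `description`, `inputSchema`) follow the published MCP specification, while the weather tool itself is hypothetical.

```python
import json

# A hypothetical "weather query" tool in the MCP discovery format:
# a name, a human-readable description, and a JSON Schema for arguments.
weather_tool = {
    "name": "get_weather",
    "description": "Return the current weather for a city.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Beijing"}
        },
        "required": ["city"],
        "additionalProperties": False,
    },
}

# A server answering a discovery request would serialize descriptors like this.
print(json.dumps(weather_tool, indent=2))
```

Because the schema travels with the tool, any MCP client can present the tool to any LLM without bespoke glue code.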

Extending AI Capability Boundaries

Large language models are powerful but not omnipotent; many tasks require external operations. MCP expands model capabilities by enabling tool calls for data fetching and complex actions.

How MCP Works

Core Roles

User: Issues the original request.

MCP Host (the client application, e.g., Cursor, Cherry Studio, Qoder): Mediates between the user, the LLM and the MCP server.

LLM (e.g., DeepSeek): Decides which tool to use.

MCP Server: Hosts a toolbox of MCP tools (e.g., web search, file read, database query) that perform the actual work.
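The host and server talk JSON-RPC 2.0; the method names below follow the MCP specification, while the tool and payloads are illustrative.

```python
import json

# Hypothetical sketch of one request/response pair between the MCP host
# (client) and an MCP server when invoking a tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Beijing"}},
}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Beijing is sunny, 25 °C."}]},
}

# The host matches each response to its request by id.
assert response["id"] == request["id"]
print(json.dumps(request))
```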

Execution Process

Stage 1 – Demand & Analysis (Steps 1‑3)

Input: User asks a question that requires external data, e.g., “What’s the weather today?”

Select MCP tool: The LLM recognises it lacks the information and decides to use a weather‑query tool.

Return MCP tool: The LLM tells the MCP client, “I want to use the ‘weather query’ tool.”

Stage 2 – Security Approval (Step 4)

MCP tool approval: The system asks the user for permission to let the AI call the selected tool. The workflow proceeds only after the user clicks “Agree.”

Stage 3 – Execution & Feedback (Steps 5‑8)

Request tool call: After approval, the client sends a request to the MCP server.

Invoke MCP tool: The server runs the chosen tool (e.g., fetches weather data).

Tool output: The tool returns its result, such as “Beijing is sunny, 25 °C.”

Return output: The server packages the result and sends it back to the client.

Stage 4 – Reporting (Steps 9‑10)

Send output and query to LLM: The client forwards both the original question and the tool’s result to the LLM.

Generate answer: The LLM composes a natural‑language response, e.g., “Today’s weather in Beijing is sunny with a temperature of 25 °C.”
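The four stages above can be sketched end to end. Everything here is hypothetical scaffolding: a real host speaks JSON-RPC to a real server, the approval step is a UI prompt, and the tool-selection step is a model call rather than a keyword check.

```python
def llm_select_tool(question):
    """Stage 1: the model decides whether a tool is needed (toy heuristic)."""
    if "weather" in question.lower():
        return {"name": "get_weather", "arguments": {"city": "Beijing"}}
    return None  # no tool needed: answer directly

def user_approves(tool_call):
    """Stage 2: security approval; a real UI would prompt the user."""
    return True  # assume the user clicks "Agree"

def mcp_server_call(tool_call):
    """Stage 3: the MCP server runs the chosen tool and returns its output."""
    if tool_call["name"] == "get_weather":
        return "Beijing is sunny, 25 °C."
    raise KeyError(tool_call["name"])

def llm_answer(question, tool_output):
    """Stage 4: the model composes the answer from question + tool result."""
    return f"Today's weather: {tool_output}"

question = "What's the weather today?"
call = llm_select_tool(question)
if call and user_approves(call):
    output = mcp_server_call(call)
    print(llm_answer(question, output))
```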

Theory Meets Practice

Demo platform: Cherry Studio (https://www.cherry-ai.com/).

Configuration steps:

Enable MCP in the settings and paste a ModelScope key.

Activate desired MCP services, such as the 12306 ticket‑query server (https://www.modelscope.cn/mcp/servers/@Joooook/12306-mcp) and a time‑service server (https://www.modelscope.cn/mcp/servers/@modelcontextprotocol/time).

In Cherry Studio’s UI, add the server, enable it, and select the appropriate MCP when chatting.

Example request payload sent by Cherry Studio:

{
  "model": "google/gemini-3-pro-preview",
  "temperature": 1,
  "messages": [
    {
      "role": "system",
      "content": "In this environment you have access to a set of tools you can use to answer the user's question..."
    }
  ]
}
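Note how the tool metadata rides inside the system message. A hypothetical sketch of how a host might assemble that prompt from its tool list (the function name and XML-ish wrapper are illustrative, modeled on the tool listing shown in this article):

```python
# Hypothetical: build the system prompt a host sends to the model,
# embedding each tool's name and description in an XML-style block.
def build_system_prompt(tools):
    blocks = []
    for t in tools:
        blocks.append(
            f"<tool><name>{t['name']}</name>"
            f"<description>{t['description']}</description></tool>"
        )
    return (
        "In this environment you have access to a set of tools you can use "
        "to answer the user's question...\n<tools>" + "".join(blocks) + "</tools>"
    )

tools = [{"name": "get_weather", "description": "Return the current weather."}]
print(build_system_prompt(tools))
```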

Concrete tool definitions exposed by the 12306 MCP server (XML‑style):

<tools>
  <tool>
    <name>tool-12306_M-get_current_date_06mcp</name>
    <description>Get the current date in the Shanghai time zone (Asia/Shanghai, UTC+8), returned in "yyyy-MM-dd" format.</description>
    <arguments>{"jsonSchema":{"type":"object","properties":{},"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}}</arguments>
  </tool>
  <tool>
    <name>tool-12306_M-get_stations_code_in_city_06mcp</name>
    <description>Look up the names of all railway stations in a city, together with their corresponding station_code values, by Chinese city name.</description>
    <arguments>{"jsonSchema":{"type":"object","properties":{"city":{"type":"string","description":"Chinese city name"}},"required":["city"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}}</arguments>
  </tool>
  ... (other tool definitions omitted for brevity) ...
</tools>
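The `jsonSchema` strings above let a client check arguments before calling the server. A minimal sketch of such a check, using only the standard library (a real client would use a full JSON Schema validator):

```python
import json

# A schema string in the same shape as the ones the server exposes.
schema_str = (
    '{"jsonSchema":{"type":"object","properties":{"city":{"type":"string"}},'
    '"required":["city"],"additionalProperties":false}}'
)

def validate(arguments, schema_str):
    """Check required keys and reject unexpected ones (partial validation)."""
    schema = json.loads(schema_str)["jsonSchema"]
    missing = [k for k in schema.get("required", []) if k not in arguments]
    if missing:
        return False, f"missing required arguments: {missing}"
    if not schema.get("additionalProperties", True):
        extra = [k for k in arguments if k not in schema["properties"]]
        if extra:
            return False, f"unexpected arguments: {extra}"
    return True, "ok"

print(validate({"city": "Beijing"}, schema_str))  # (True, 'ok')
print(validate({}, schema_str))                   # missing 'city'
```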

Tool‑use rules (excerpt):

Always provide correct arguments; never pass variable names.

Call a tool only when needed; otherwise answer directly.

Avoid repeating a tool call with identical parameters.

Use the XML tag format shown in examples; other formats are invalid.
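The last rule matters because the host parses the model's raw text to find tool calls. A hypothetical sketch of that extraction step (the `<tool_call>` tag name is illustrative; each host defines its own exact format):

```python
import json
import re

# Hypothetical: pull a JSON tool call out of an XML-tagged span in model output.
def parse_tool_call(text):
    m = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", text, re.DOTALL)
    if m is None:
        return None  # no tool call: the model answered directly
    return json.loads(m.group(1))

reply = '<tool_call>{"name": "get_weather", "arguments": {"city": "Beijing"}}</tool_call>'
print(parse_tool_call(reply))
```

If the model emits any other format, the regex finds nothing and the host treats the reply as a direct answer, which is exactly why the rules insist on one canonical tag format.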

Current MCP Challenges

When many MCP tools are enabled, sending all their metadata to the LLM consumes a large number of tokens, and a proliferation of similar‑looking tools increases the risk of selecting the wrong one.
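A back-of-the-envelope estimate makes the cost visible, assuming the rough heuristic of about 4 characters per token (the tool counts and description lengths are illustrative):

```python
# Rough token cost of resending all tool metadata on every turn.
def metadata_tokens(tools, chars_per_token=4):
    chars = sum(len(t["name"]) + len(t["description"]) for t in tools)
    return chars // chars_per_token

# 50 enabled tools with ~300-character descriptions each:
tools = [{"name": f"tool_{i}", "description": "x" * 300} for i in range(50)]
print(metadata_tokens(tools))  # thousands of tokens before the user even asks
```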

Most existing MCP implementations expose only the “tool” component of the protocol; they lack the “resource” and “prompt” components that would make the tools more AI‑friendly. Many wrappers simply re‑expose traditional web APIs without optimisation for LLM consumption.

Tags: LLM, MCP, Tool Integration, AI Agent, Protocol, Tool Calling, Cherry Studio