Understanding Model Context Protocol (MCP) vs. Function Calling
The Model Context Protocol (MCP), announced by Anthropic, standardizes how AI applications provide context to LLMs through a client-server architecture that simplifies data and tool integration. This article explains how MCP works, compares it with function calling, and looks at its benefits, workflow, controversies, and future prospects.
01 What is MCP and how it differs from Function Calling
MCP and Function Calling are both techniques for connecting AI models to external data and tools, but they differ in scope, implementation, and ecosystem support. Function Calling is a vendor-specific model feature that lets an LLM request dynamic tool calls, while MCP is an open protocol that standardizes how any AI model integrates with data sources and applications.
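To make the difference concrete, here is a minimal sketch; the get_weather tool and both bodies are illustrative. The first snippet declares a tool in one vendor's function-calling schema (OpenAI's Chat Completions format), which must be re-declared for every provider; the second exposes the same capability once as an MCP server, using the FastMCP helper from the official Python SDK, so any MCP-capable host can discover it over the protocol.

```python
# Function Calling: a vendor-specific tool schema, passed on every
# request to one provider's chat API (OpenAI-style format shown here).
function_calling_tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# MCP: the same capability, exposed once by a server that any
# MCP-capable host (Claude Desktop, an IDE, ...) can discover.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"Sunny in {city}"  # placeholder implementation

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

The function-calling schema lives inside one provider's request format, whereas the MCP tool is advertised over the protocol and can be reused by any compliant host.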
02 MCP Architecture Overview
MCP follows a client‑server architecture, allowing a host application to connect to multiple servers for modularity and security.
Its main design goals are:
Standardization: unify how AI models interact with external data and tools.
Simplified development: lower the complexity for developers connecting data sources.
Enhanced capability: transform AI from a pure text generator into a system that can handle file operations, data queries, and external system interactions.
2.1 MCP Hosts
Programs that need data access, such as Claude Desktop, IDEs, or other AI tools.
2.2 MCP Clients
MCP Clients act as a bridge between the LLM and MCP Servers, maintaining a one‑to‑one connection with the server.
The typical workflow includes the following steps; a minimal client sketch appears after the list:
Initialize connection: the client establishes a connection with the server and performs authentication.
Retrieve available functions: the client queries the server for tools, resources, and prompts.
Prepare context: the client packages the user query and function metadata into structured data and sends it to the LLM.
LLM decision: the LLM decides whether to invoke a tool, access a resource, or use a prompt, and generates the appropriate request.
Tool execution: the client forwards the request to the server; the server may ask the user for authorization for sensitive operations.
Result feedback: the server returns execution results or resource data, which the client passes back to the LLM.
Generate response: the LLM synthesizes all information into a natural-language answer.
Display to user: the client shows the final response in its interface.
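A minimal sketch of this loop, using the ClientSession from the official MCP Python SDK over the stdio transport; the server.py launch command and the get_weather call are placeholders, and steps 3-4 and 7-8 appear only as comments because the LLM round-trip depends on the model vendor's API.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch command for a local MCP server; "server.py" is a placeholder.
server_params = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Step 1: initialize the connection (capability negotiation).
            await session.initialize()

            # Step 2: retrieve what the server advertises (resources and
            # prompts can be listed the same way with list_resources()
            # and list_prompts()).
            tools = await session.list_tools()

            # Steps 3-4 (not shown): package the user query plus the tool
            # metadata for the LLM, which decides what to invoke.

            # Steps 5-6: forward the LLM's request; the server executes it
            # and returns the result.
            result = await session.call_tool("get_weather", {"city": "Paris"})

            # Steps 7-8 (not shown): pass the result back to the LLM to
            # synthesize the natural-language answer shown to the user.
            print(result.content)

asyncio.run(main())
```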
2.3 MCP Servers
MCP Servers are the core of the protocol, providing external data and operational capabilities to clients and LLMs via standardized interfaces. They support three main functions, shown together in the server sketch after this list:
Resources: readable data such as local files, database records, or API responses that can be passed to the LLM as context.
Tools: executable functions (e.g., database queries, network requests, file modifications) that the LLM can invoke through the client, typically requiring user authorization for safety.
Prompts: predefined templates or instructions that guide the LLM to perform specific tasks, standardizing interactions like “summarize this document” or “generate a Python code snippet”.
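A minimal server sketch covering all three, again with the FastMCP helper from the official Python SDK; the note:// URI scheme, the names, and the bodies are illustrative.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes")

# Resource: readable data the client can hand to the LLM as context.
@mcp.resource("note://{note_id}")
def read_note(note_id: str) -> str:
    """Return the text of a note by id."""
    return f"Contents of note {note_id}"  # placeholder lookup

# Tool: an executable function the LLM can invoke through the client.
@mcp.tool()
def append_note(note_id: str, text: str) -> str:
    """Append text to a note."""
    return f"Appended {len(text)} characters to note {note_id}"

# Prompt: a predefined template that steers the LLM toward a task.
@mcp.prompt()
def summarize_note(note_id: str) -> str:
    return f"Summarize the note at note://{note_id} in three bullet points."

if __name__ == "__main__":
    mcp.run()
```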
2.4 Local Data Sources
Files, databases (e.g., SQLite), or services that are safely accessible on the user’s computer.
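As a sketch of the local case, a server can wrap a SQLite file using Python's standard-library sqlite3 module; the orders.db path and the table schema are hypothetical. The model only ever sees query results, never the file itself.

```python
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("local-db")

@mcp.tool()
def list_orders(customer: str) -> str:
    """Return the order ids for a customer from a local SQLite database."""
    # Hypothetical database file and schema: orders(id, customer).
    with sqlite3.connect("orders.db") as conn:
        rows = conn.execute(
            "SELECT id FROM orders WHERE customer = ?", (customer,)
        ).fetchall()
    return ", ".join(str(row[0]) for row in rows) or "no orders found"

if __name__ == "__main__":
    mcp.run()
```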
2.5 Remote Services
External systems reachable over the internet, such as APIs.
03 Controversy: Standard or Fad?
Despite its theoretical potential, MCP’s practical value is debated. In March 2025, LangChain CEO Harrison Chase and LangGraph lead Nuno Campos engaged in a heated discussion.
Supporters’ view: Chase argues that MCP lowers the technical barrier, allowing non-developers to add tools to agents without deep code changes. As foundation models improve, MCP's tool-calling capabilities could become more powerful and align with future trends.
Critics’ view: Campos counters that MCP's added complexity (e.g., bidirectional communication) raises development costs, and that current tool-calling success rates hover around 50%, which calls its practicality into question. He also notes that the low success rate may stem from models not being optimized for specific tool sets, and that the MCP ecosystem is still immature, with few real-world examples.
04 Final Summary
In practice, MCP offers a standardized way to extend AI applications, simplifying data and tool integration—especially useful for cross‑platform, multi‑source scenarios. However, its complexity and the current reliability of model‑tool interactions may limit short‑term adoption. The low success rate also reflects broader challenges in LLMs’ ability to understand complex contexts. If the protocol is refined and paired with stronger base models, MCP could become an important tool in the AI toolbox.
References:
https://modelcontextprotocol.io/introduction
https://www.anthropic.com/news/model-context-protocol
https://github.com/modelcontextprotocol/servers