Why MCP Is Poised to Replace Function Calling for LLM Agents

The Model Context Protocol (MCP) introduced by Anthropic addresses the scalability, integration, and context‑transfer limitations of traditional Function Calling by offering a standardized, bidirectional, and context‑aware communication layer that simplifies tool discovery, security, and workflow orchestration for LLM‑driven agents.


MCP vs. Function Calling

Before MCP, the dominant approach was Function Calling, which suffers from two critical issues: lack of standardized system integration and inefficient context transmission.

System integration standardization: Agents need to call external systems, but without a common standard every tool requires custom integration code, which drives up maintenance complexity. OpenAI's Function Calling, for example, is tied to specific models and platforms, making implementations hard to reuse.

Context transmission optimization: Each Agent must maintain complex context-updating logic between itself and the LLM, which is cumbersome and inefficient.

In November 2024, Anthropic released MCP (Model Context Protocol), an open standard centered on Agent and LLM communication that simplifies integration architecture and context handling.

MCP System Architecture

MCP provides a standardized architecture with several benefits:

High scalability and flexibility: New tools and services can be added without changing existing workflows.

Automatic discovery of tools and services: After registration, LLMs can discover available resources automatically.

Reduced development complexity: Abstracted interaction logic lets developers focus on core agent functionality.

Enhanced security: Uses mature, secure transport protocols.

Collective intelligence: Standardized channels enable distributed Agentic AI systems to achieve results unattainable by a single architecture.

Layered Architecture

MCP sits between the Agent/LLM and the external systems it needs to access, and is organized into three layers: a transport layer, a protocol layer, and a feature layer, each described in the sections below.

Integration Architecture

The integration architecture consists of three core components: MCP Host, MCP Client, and MCP Server.

MCP Host

Definition: The runtime environment of an Agent that wishes to access external systems via MCP.

Provides a user interface for the Agent’s services.

Configures required MCP components via a config.json file.

Starts MCP Clients (one per server connection) to use MCP Server functionality.

Enforces security boundaries and user authorizations.

MCP Client

Definition: A functional module of the Agent that maintains a 1:1 protocol connection with the MCP Server, acting as a bridge.

Connection management: establishes, maintains, and closes the link, handling heartbeats and errors (see the ping sketch below).

Request forwarding: forwards Agent/LLM requests to the Server and receives responses.

Error handling: reports communication errors to the upper layer.

Capability discovery: queries the Server for available resources, tools, and prompts.

The Host and Client collaborate closely; the Host supplies the execution environment and security boundary, while the Client implements the protocol.
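As a concrete example of connection management, the Client can use the protocol's built-in ping request as a heartbeat; either side may send it, and the receiver answers with an empty result. A minimal sketch:

{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "ping"
}

The peer replies:

{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {}
}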

MCP Server

Definition: An independent, lightweight server program that serves as the front‑end for external systems, providing data access, tool execution, and service invocation to the LLM.

Capability abstraction via Resources, Tools, and Prompts.

Publishes capabilities to the MCP Client (see the notification sketch below).

Translates and forwards requests to external APIs, databases, files, SSE streams, or other Internet services.
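When a Server's tool set changes at runtime, it can publish that change to its Client with a list_changed notification (available when the Server declared the corresponding listChanged capability during initialization). A minimal sketch:

{
  "jsonrpc": "2.0",
  "method": "notifications/tools/list_changed"
}

On receiving it, the Client typically re-issues tools/list to refresh its view of the Server's capabilities.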

Local / Remote Resources

MCP defines two core resource types:

Local Resources: Resources residing in the Host's local environment (files, databases, applications), accessed securely over local transports.

Remote Resources: Resources accessed over the Internet via Web APIs (cloud storage, remote services, etc.).
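Both kinds appear side by side in a resources/list result and are distinguished only by their URIs; a hypothetical result mixing a local file with a remote API endpoint might look like this:

{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "resources": [
      {
        "uri": "file:///home/user/documents/report.pdf",
        "name": "Quarterly report",
        "mimeType": "application/pdf"
      },
      {
        "uri": "http://api.example.com/data",
        "name": "Example remote dataset",
        "mimeType": "application/json"
      }
    ]
  }
}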

MCP Communication Protocol

The protocol defines transport, message formats, and communication rules, offering stateful, context‑preserving interactions.

Real-time bidirectional communication: Unlike traditional request-response REST APIs, MCP supports dynamic, two-way message exchange.

Superior context awareness: Provides a standardized context container that maintains long-term session memory for coherent LLM responses.

Context transmission performance: Transmitting a 10 KB JSON context reportedly drops from more than 500 ms to roughly 20 ms, and incremental updates avoid resending unchanged data.

Transport Layer

MCP uses JSON‑RPC 2.0 and supports multiple transport mechanisms:

Stdio – suitable for local, same‑machine communication via stdin/stdout.

HTTP over SSE – legacy method for networked real‑time updates (now replaced).

Streamable HTTP – the current method allowing multiple concurrent clients, supporting both GET (SSE stream) and POST requests, with optional Mcp-Session-Id for stateful sessions.

Stdio

Used for local integration where the client and server communicate through OS pipes.
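Messages are exchanged as newline-delimited JSON-RPC: the Host launches the Server as a child process, the Client writes each request as a single line to the Server's stdin, and reads responses line by line from its stdout. A ping exchange on the pipes would look like this (first line client to server, second line server to client):

{"jsonrpc": "2.0", "id": 1, "method": "ping"}
{"jsonrpc": "2.0", "id": 1, "result": {}}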

HTTP over SSE (Deprecated)

Established a long-lived connection with two endpoints (an SSE endpoint for server-to-client pushes and an HTTP POST endpoint for client-to-server messages). It required persistent connections, session affinity, and reconnection handling, which led to its replacement by Streamable HTTP in the 2025-03-26 protocol revision.

Streamable HTTP

The Server exposes a single HTTP endpoint supporting both GET and POST. Clients open an SSE stream via GET with Accept: text/event-stream, then send JSON-RPC requests via POST, optionally including an Mcp-Session-Id header for stateful interactions.
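A rough sketch of a single request over Streamable HTTP is shown below; the /mcp path and the session ID value are illustrative, since each Server chooses its own endpoint and issues its own session identifiers.

POST /mcp HTTP/1.1
Content-Type: application/json
Accept: application/json, text/event-stream
Mcp-Session-Id: 3f7c9a1e-5b2d-4e8f-9c6a-0d1b2c3d4e5f

{"jsonrpc": "2.0", "id": 5, "method": "tools/list"}

The Server may answer with a single JSON response, or upgrade the reply to an SSE stream when it needs to send multiple messages.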

Protocol Layer

Connection Lifecycle Management

Manages initialization, version negotiation, capability exchange, user authorization, and permission control.

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "clientInfo": {"name": "example-client", "version": "1.0.0"},
    "capabilities": {"roots": {"listChanged": true}, "sampling": {}}
  }
}

The Server responds with its supported capabilities or an error if versions are incompatible.

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2025-03-26",
    "serverInfo": {"name": "example-server", "version": "1.0.0"},
    "capabilities": {"resources": {"subscribe": true, "listChanged": true}, "tools": {"listChanged": true}}
  }
}

After successful negotiation, the client sends an initialized notification.

{
  "jsonrpc": "2.0",
  "method": "notifications/initialized",
  "params": {}
}

Message Exchange

Once a session is established, the client can request the list of available tools/resources, and the LLM can incorporate this information into its reasoning.
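For example, the Client can discover tools with a tools/list request; the Server returns each tool's name, description, and JSON-Schema input definition. The get_weather tool and its schema here are illustrative, matching the invocation example later in this article:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/list"
}

A possible response:

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "get_weather",
        "description": "Get the current weather for a given location",
        "inputSchema": {
          "type": "object",
          "properties": {
            "location": {"type": "string", "description": "City name"}
          },
          "required": ["location"]
        }
      }
    ]
  }
}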

Feature Layer

The Feature Layer provides business‑level capabilities for Agents and LLMs.

MCP Client: Supports sampling, root directory definitions (roots), and related features (see the sampling sketch below).

MCP Server: Exposes Resources, Tools, and Prompts.
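Sampling reverses the usual direction: the Server asks the Client to run an LLM completion on its behalf, while the Host keeps control over model choice and user approval. A minimal sketch of such a request (the prompt text is illustrative):

{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      {
        "role": "user",
        "content": {"type": "text", "text": "Summarize the latest report in one paragraph."}
      }
    ],
    "maxTokens": 200
  }
}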

Resources

Data sources such as files, database records, or API responses. Each resource has a URI, name, description, and MIME type.

List resources: resources/list
Read a resource: resources/read
Subscribe/unsubscribe to updates: resources/subscribe / resources/unsubscribe

URI format examples:

file:///home/user/documents/report.pdf
postgres://database/customers/schema
http://api.example.com/data
weather://forecast?city=beijing&days=5
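Reading one of these resources is a resources/read request carrying the URI; the Server returns the contents with their MIME type (binary data is base64-encoded in a blob field, truncated here for brevity):

{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "resources/read",
  "params": {"uri": "file:///home/user/documents/report.pdf"}
}

A possible response:

{
  "jsonrpc": "2.0",
  "id": 8,
  "result": {
    "contents": [
      {
        "uri": "file:///home/user/documents/report.pdf",
        "mimeType": "application/pdf",
        "blob": "JVBERi0xLjQK..."
      }
    ]
  }
}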

Tools

Executable functions with JSON‑Schema‑defined inputs, discovered via tools/list and invoked via tools/call. Human‑in‑the‑loop approval is required for safety.

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": {"location": "New York"}
  }
}
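The Server executes the tool and returns its output as a list of content items, together with an isError flag that distinguishes tool failures from protocol errors. A sketch of a possible result (the weather text is illustrative):

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {"type": "text", "text": "Current weather in New York: 18°C, partly cloudy"}
    ],
    "isError": false
  }
}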

Prompts

Template prompts that guide LLM output, supporting parameter rendering and resource context embedding. Discovered via prompts/list and retrieved with prompts/get.
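A prompt is fetched with prompts/get, passing any template arguments; the Server renders it into ready-to-use messages that the Host can drop into the LLM conversation. A sketch with a hypothetical code_review prompt:

{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "prompts/get",
  "params": {
    "name": "code_review",
    "arguments": {"language": "python"}
  }
}

A possible response:

{
  "jsonrpc": "2.0",
  "id": 6,
  "result": {
    "description": "Review code for bugs and style issues",
    "messages": [
      {
        "role": "user",
        "content": {"type": "text", "text": "Please review the following python code for bugs and style issues:"}
      }
    ]
  }
}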

MCP Context Window

MCP maintains a dynamic context window that stores conversation history and environment data, expanding with each interaction. Non‑essential information can be compressed into embeddings to avoid overload.

MCP Security and Trust Mechanisms

Security is a core principle: MCP adopts a multi-layer model that favors local-first execution, explicit user consent for data access, tool-invocation approval, sampling control, resource boundaries, and permission-based execution, combined with encrypted transport (TLS) and authentication and authorization mechanisms such as OAuth 2.0 and RBAC.

MCP Workflow Example

1. The Host queries the Server for available tools.
2. The Host feeds the tool list to the LLM, which selects a tool.
3. The Host sends a tool-execution request to the Server and receives the result.
4. The result is fed back to the LLM, which produces the final response for the user.

Practical Application: Claude Desktop MCP

Claude Desktop acts as both MCP Host and MCP Client. Configuration resides in claude_desktop_config.json, where the mcpServers section defines the command and arguments used to launch each server.

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/fanguiju/Documents"]
    }
  }
}

After restarting the application, the FileSystem MCP Server provides operations such as read_file, write_file, list_directory, etc., all requiring user confirmation for safety.

MCP Registries

Multiple open-source registries host MCP tools and servers, enabling developers to publish and discover capabilities. Examples include the official MCP servers repository on GitHub, MCP World, MCP.so, and others.

Overall, MCP offers a standardized, secure, and extensible framework that can replace fragmented Function Calling implementations, streamline agent‑LLM integration, and accelerate AI application development.
