Is MCP Dead? How CLI Is Redefining AI Agent Interactions
The article examines the rise and decline of the Model Context Protocol (MCP), outlines its four critical flaws (context bloat, architectural complexity, security risks, and passive tool design), presents command-line interfaces (CLI) as a more efficient, secure, and debuggable alternative for AI agents, and discusses hybrid approaches and practical implementations.
Background
Model Context Protocol (MCP) was introduced by Anthropic in late 2024 to expose external tools (file systems, databases, APIs) to large language models via a standardized client-server protocol. Within a year, its adoption had declined sharply.
1. MCP Architecture
When an MCP server starts it sends a tools/list JSON‑RPC message containing the names, descriptions, and JSON‑Schema of all available tools. The client injects these definitions into the LLM’s system prompt. When the model decides to invoke a tool, the client sends a tools/call message; the server executes the request and returns the result.
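As an illustrative sketch of that exchange (the message shapes follow the JSON-RPC pattern described above, but the `read_file` tool and its arguments are hypothetical, not any real server's output), the two messages can be printed with plain `echo`:

```shell
# Hypothetical tools/list request and tools/call invocation, as an
# MCP client would send them over stdio. Tool name and path are made up.
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
echo '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"read_file","arguments":{"path":"/tmp/notes.txt"}}}'
```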
2. Critical Limitations of MCP
2.1 Context Bloat
Loading all tool definitions up front inflates the prompt. Connecting just three MCP servers consumes roughly 143 K tokens, about 72 % of a 200 K-token model's context window, leaving little room for task data. Empirical studies show tool-selection accuracy dropping from 43 % to below 14 % as the number of tools grows.
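A quick sanity check of the arithmetic behind that figure, using the numbers quoted above:

```shell
# 143K tokens of tool definitions inside a 200K-token context window.
used=143000
window=200000
# Shell integer division floors the result; 71.5 rounds to the ~72% quoted above.
echo "$(( used * 100 / window ))% of the window consumed before any task data"
```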
2.2 Architectural Complexity
Each tool runs in a separate process behind a network boundary and requires its own authentication (OAuth2, API keys, personal tokens). Initialization can fail at the model, protocol conversion, network, or downstream service layer, making debugging difficult.
2.3 Security Risks
Indirect prompt injection: malicious content in shared documents or API responses can cause the MCP server to execute unintended commands.
Tool poisoning: a rogue MCP server can register deceptively named tools to lure the model into calling the wrong tool.
Rug-pull attacks: an attacker first publishes a legitimate MCP server, gains trust, then updates it with malicious code.
Research has identified nearly 7,000 publicly exposed MCP servers, about half of which lack any authentication.
2.4 Passive Tool Design
Tools are “passively exposed”: the AI can only use what the server pre‑registers and cannot discover new commands or more efficient usage patterns on its own.
3. CLI as an Alternative
3.1 Progressive Discovery
CLI tools load definitions on demand. An agent first runs gh --help to discover top‑level commands, then queries sub‑commands such as gh pr --help only when needed. This on‑demand approach reduces token consumption dramatically—empirical tests show CLI to be about 17 × cheaper than MCP while maintaining near‑100 % reliability.
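The two-step pattern can be sketched with toy stand-ins (these functions mimic `gh --help` and `gh pr --help`; their output is invented, not real `gh` output): the agent pays only for the help text it actually requests.

```shell
# Toy stand-ins for progressive discovery; real agents would run `gh` itself.
top_level_help() { echo "commands: pr, issue, repo, release"; }
pr_help()        { echo "pr commands: list, view, checkout, merge"; }

top_level_help   # step 1: discover what exists (a few dozen tokens)
pr_help          # step 2: drill down only when the task requires it
```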
3.2 Pipe‑Based Composition
Unix pipes allow chaining of commands without additional code. For example, the output of one command can be fed directly into the next via the | operator, enabling flexible post‑processing.
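A minimal, self-contained illustration using only standard utilities: generate some lines, filter them, then count the matches.

```shell
# Generate -> filter -> count, chained with pipes and no glue code.
printf 'error: disk full\ninfo: ok\nerror: timeout\n' \
  | grep '^error' \
  | wc -l          # counts the lines that matched
```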
3.3 LLMs Naturally Understand CLI
LLMs have been trained on decades of Unix documentation, Stack Overflow answers, and open‑source shell scripts. Consequently they recognize commands such as git, curl, grep, docker, and kubectl without needing explicit tool schemas.
3.4 Strong Debuggability
If a CLI command fails, engineers can rerun the exact command in a terminal to see the same input the model saw, whereas MCP requires inspecting opaque JSON logs.
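A sketch of that workflow (the file path is deliberately nonexistent to force a failure): the engineer reruns the exact command and sees the same stderr the model saw.

```shell
# Reproduce a failed agent command verbatim and inspect its diagnostics.
if ! grep 'needle' /no/such/file 2>err.log; then
  echo "grep failed with a non-zero exit code"
fi
cat err.log   # the exact error text the model received
```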
3.5 Mature Ecosystem
CLI tools provide standardized authentication (OAuth2, API keys), well-defined exit codes, and stable I/O streams (/dev/stdout, /dev/stderr), benefiting from decades of engineering practice.
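Those conventions are simple enough to demonstrate directly: zero means success, anything else means failure, and results travel on a different stream than diagnostics.

```shell
# Exit-code convention and stream separation in four lines.
true  && echo "true exited with 0"
false || echo "false exited non-zero"
echo "result"     >&1    # stdout: data for an agent (or pipe) to parse
echo "diagnostic" >&2    # stderr: noise that can be routed elsewhere
```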
4. Comparative Summary (MCP vs CLI)
Core role : MCP – tell the AI how to connect to external services; CLI – tell the AI how to act directly.
Implementation : MCP – JSON‑RPC server; CLI – native command‑line interface.
Token consumption : MCP – thousands of tokens per tool definition; CLI – on‑demand loading, minimal idle tokens (30‑50 tokens).
Stability : MCP – medium; servers may crash; CLI – very high.
Security : MCP – architectural risks and unauthenticated endpoints; CLI – mature, controllable security model.
Debug difficulty : MCP – high; CLI – very low.
5. Enabling AI to Use CLI
5.1 Install GitHub CLI
# macOS
brew install gh
# Ubuntu/Debian
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null
sudo apt update && sudo apt install gh
# Windows (winget)
winget install --id GitHub.cli
5.2 Authenticate
gh auth login
5.3 Example: List Open Pull Requests
gh pr list --state open --json number,title --jq '.[] | "\(.number): \(.title)"'
5.4 Advanced Pipe Composition
gh pr list --state open --json number,title,author --jq '.[] | select(.title | test("bug")) | "\(.number) by \(.author.login)"'
5.5 Bridge Tool: mcpkit
mcpkit is an MCP client that turns any MCP server into a CLI command or lightweight Agent Skill, eliminating context bloat.
npm install -g @balakumar.dev/mcpkit
mcpkit install "npx -y @modelcontextprotocol/server-github" --name github
mcpkit call github search_repositories '{"query":"mcpkit"}'
5.6 Bridge Tool: unmcp
unmcp provides a minimal wrapper to invoke MCP server tools directly from the terminal without the JSON-RPC overhead.
uvx unmcp
uvx unmcp filesystem read_file --path "/tmp/example.txt"
uvx unmcp filesystem --json read_file --path "/tmp/example.txt"
6. Industry Adoption of CLI
Major platforms have released open‑source CLIs: Feishu’s CLI (200+ commands, 19 built‑in Agent Skills), Google’s gws CLI for Workspace, and Zilliz’s CLI for managing Milvus vector databases. The trend shows enterprises preferring CLI + Skills for AI agents because commands are unambiguous, easy to automate, and have lower execution cost than GUI interactions.
Conclusion
MCP remains valuable for scenarios that require a standardized protocol, cross-platform tool sharing, or multi-agent collaboration. However, its high token overhead, architectural complexity, and security concerns limit its practicality for many workloads. CLI offers excellent performance, low cost, high stability, and natural compatibility with LLMs. A hybrid architecture, using CLI for frequent, simple tasks and MCP for complex, standardized integrations, leverages the strengths of both approaches, with bridge tools such as mcpkit and unmcp enabling seamless interoperation.
Su San Talks Tech
Su San, former staff at several leading tech companies, is a top creator on Juejin and a premium creator on CSDN, and runs the free coding practice site www.susan.net.cn.
