Why MCP Is Dead and CLI Is Rising: Perplexity’s Shift Sparks Community Support
Although the Model Context Protocol (MCP) was launched by Anthropic in late 2024 and initially praised, users now report severe context‑window costs, instability, and cumbersome authentication, leading Perplexity and others to abandon it in favor of traditional CLI tools that remain more composable and reliable.
Model Context Protocol (MCP) – core mechanism
MCP defines a set of tools as functions with JSON schemas, injects the definitions into an LLM agent’s context, and allows the agent to invoke the tools by name.
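To make the mechanism concrete, the following sketch shows what a single MCP-style tool definition looks like once serialized into an agent's context. The tool name, description text, and schema shown here are hypothetical placeholders, not an actual server's definitions:

```python
import json

# A hypothetical MCP-style tool definition: a name, a description,
# and a JSON Schema for the parameters. The agent sees this serialized
# text in its context and invokes the tool by name.
tool_definition = {
    "name": "jira_issue_view",
    "description": "Fetch a Jira issue by its key and return its fields.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "issue_key": {"type": "string", "description": "e.g. PROJ-123"}
        },
        "required": ["issue_key"],
    },
}

# Everything below is what actually occupies context-window tokens.
injected_text = json.dumps(tool_definition, indent=2)
print(len(injected_text), "characters injected for a single tool")
```

Every connected server contributes a block like this before the agent has processed a single user request.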
Inherent technical limitations
The injection of each tool consumes tokens for the tool name, description, parameter schema, and examples. Because the cost grows linearly with the number of tools, the context window is quickly exhausted. For example, connecting 10 services with 5 tools each can consume several thousand tokens before the agent begins any useful work.
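A back-of-envelope calculation illustrates the linear growth. The per-tool token figure below is an assumption for illustration; real definitions vary with schema size and the number of examples included:

```python
# Rough sketch of the linear context cost described above.
# TOKENS_PER_TOOL is an assumed average covering the tool name,
# description, parameter schema, and examples.
TOKENS_PER_TOOL = 125
services = 10
tools_per_service = 5

total_tools = services * tools_per_service
overhead = total_tools * TOKENS_PER_TOOL
print(f"{total_tools} tools -> ~{overhead} tokens before any work begins")
```

At these assumed rates, the 10-service scenario above costs on the order of 6,000 tokens of pure definition overhead, consistent with the "several thousand tokens" figure.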
Before using MCP, practitioners are forced to choose one of three approaches:
Pre‑load all tools – sacrifices inference space, dialogue history, and working memory, degrading task performance.
Limit the number of integrations – restricts the agent to a small subset of services.
Implement dynamic tool loading – adds middleware that selects tools at runtime, introducing latency and architectural complexity.
All three options waste the agent’s most valuable resource: the context window.
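The third option, dynamic tool loading, can be sketched as a middleware step that selects a small tool subset per request. The keyword-overlap selector and the registry entries here are deliberately naive assumptions, intended only to show where the extra runtime step and latency come from:

```python
# Sketch of dynamic tool loading: before each request, a middleware
# picks only the tools whose descriptions look relevant, trading
# latency and complexity for context-window savings.
def select_tools(request: str, registry: dict[str, str], limit: int = 3):
    """Return up to `limit` tool names whose description shares at
    least one word with the request (naive keyword matching)."""
    words = set(request.lower().split())
    scored = [
        (len(words & set(desc.lower().split())), name)
        for name, desc in registry.items()
    ]
    scored.sort(reverse=True)
    return [name for score, name in scored[:limit] if score > 0]

registry = {
    "jira_issue_view": "view a jira issue by key",
    "slack_post": "post a message to a slack channel",
    "gh_pr_list": "list open github pull requests",
}
print(select_tools("show me the jira issue", registry))  # ['jira_issue_view']
```

Even this toy version adds a selection pass on every turn; a production selector (embeddings, an extra LLM call) adds correspondingly more latency.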
Operational issues observed in practice
Unstable initialization – agents often restart when the MCP server fails to start; recovery may require a full state reset.
Repeated authentication – each integrated tool requires a separate auth flow, leading to “endless re‑authentication” when many tools are used.
Binary permission model – agents can whitelist tools by name only; there is no fine‑grained read‑only or parameter‑level restriction.
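The permission gap can be sketched as follows. The tool names, subcommand sets, and both helper functions are illustrative assumptions, not real MCP or jira-cli APIs:

```python
# Contrast of the two permission models described above: an MCP-style
# whitelist can only allow or deny a tool by name, while a wrapper
# around a CLI can inspect the argument vector and permit only
# read-only actions.
ALLOWED_TOOLS = {"jira_issue_view", "jira_issue_create"}

def mcp_allowed(tool_name: str) -> bool:
    # All-or-nothing: once whitelisted, every parameter is permitted.
    return tool_name in ALLOWED_TOOLS

READ_ONLY_SUBCOMMANDS = {"view", "list"}
WRITE_SUBCOMMANDS = {"create", "delete", "edit"}

def cli_allowed(argv: list[str]) -> bool:
    # Finer-grained: allow the jira binary, but only read-only actions.
    return (
        bool(argv)
        and argv[0] == "jira"
        and all(arg not in WRITE_SUBCOMMANDS for arg in argv)
        and any(arg in READ_ONLY_SUBCOMMANDS for arg in argv[1:])
    )

print(mcp_allowed("jira_issue_create"))          # True: name-level only
print(cli_allowed(["jira", "issue", "view"]))    # True: read-only action
print(cli_allowed(["jira", "issue", "delete"]))  # False: write action
```

The name-only check cannot express "viewing is fine, creating is not"; the argv-level check can.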
CLI and API as a proven alternative
LLMs have been trained on millions of man pages, Stack Overflow answers, and shell‑script repositories, enabling them to use traditional command‑line interfaces effectively. A CLI tool’s documentation (function, parameters, usage) is sufficient for the model to generate correct calls.
When an unexpected action occurs, the same CLI command can be run directly in a terminal to reproduce the model’s view (for example, jira issue view). Under MCP, the tool exists only as JSON inside the LLM conversation, so debugging requires inspecting complex JSON logs, whereas CLI output is immediate and human‑readable.
CLI tools are composable: output can be piped through jq, filtered with grep, or redirected to files, enabling flexible workflows for both humans and agents. To achieve comparable functionality with MCP, the entire plan must be packed into the context window or custom filtering logic must be built inside the MCP server, both of which increase effort and reduce performance.
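The pipe-based composition above can be sketched from an agent-orchestration point of view. Here printf and grep stand in for a real tool chain (e.g., a jira or jq invocation); the data being filtered is invented for illustration:

```python
import subprocess

# Sketch of CLI composability: run one command, pipe its output
# through a filter, and post-process the result -- the same shape as
# `some-tool | grep open`. printf emits three fake status lines.
producer = subprocess.Popen(
    ["printf", "open PROJ-1\\ndone PROJ-2\\nopen PROJ-3\\n"],
    stdout=subprocess.PIPE,
)
consumer = subprocess.run(
    ["grep", "open"], stdin=producer.stdout, capture_output=True, text=True
)
producer.stdout.close()
print(consumer.stdout)  # only the two "open" lines survive the filter
```

The agent never needs a bespoke filtering tool: the same grep step composes with any producer, which is exactly the property the MCP route has to rebuild inside each server.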
Decades of design iteration have produced CLI tools that are debuggable, composable, and integrate with existing authentication systems, making them a sufficient abstraction without the overhead of MCP.
Reference links:
https://ejholmes.github.io/2026/02/28/mcp-is-dead-long-live-the-cli.html
https://x.com/dzhng/status/2029518820872945889
Machine Learning Algorithms & Natural Language Processing