
Why CLI Is Emerging as the Native Language for AI Agents Over Heavy Protocols

In early 2026 the AI community shifted sharply away from the Model Context Protocol (MCP) toward CLI‑first toolchains. Engineers point to token inflation, fragmented authentication, and lost composability in MCP, while praising the low‑friction, text‑based, and easily debuggable nature of command‑line interfaces for building robust AI agents.


Background: A Sudden Turn in Early 2026

Early 2026 saw a dramatic change in the AI tooling landscape: major figures like Perplexity CTO Denis Yarats announced a move from MCP to API/CLI, YC partner Garry Tan publicly criticized MCP, and enterprise collaboration products such as Feishu and DingTalk began adopting CLI‑first approaches. New coding agents like Claude Code and OpenClaw also embraced CLI‑first designs, signaling an industry‑wide vote for practical engineering over heavyweight protocols.

Three Fatal Flaws of MCP

1. Token Inflation and Cost Black Hole

MCP requires schemas for each tool to be packed into the model context. As the number of tools grows from a handful to dozens or hundreds, the token cost skyrockets because the entire schema set must be loaded at startup, consuming tens of thousands of tokens before any work begins. This "token inflation" turns the context window into a costly pre‑flight check rather than useful reasoning space.
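To make the scale concrete, here is a back‑of‑envelope sketch; the tool count and per‑schema token figure are illustrative assumptions, not measurements:

```shell
# Illustrative numbers only: 50 registered tools, ~500 schema tokens each,
# all loaded at session start whether or not a tool is ever called.
tools=50
tokens_per_schema=500
echo "$((tools * tokens_per_schema)) tokens spent before the first task"
```

At these assumed numbers, 25,000 tokens of context are gone before the agent does anything useful.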

2. Fragmented Authentication and Authorization

Each integrated service demands its own OAuth flow, API key rotation, tenant mapping, and secret management. By contrast, CLI tools can reuse mature OS‑level mechanisms such as SSH keys, environment variables (e.g., AWS_ACCESS_KEY_ID, GITHUB_TOKEN), and local config files (~/.ssh/config, ~/.gitconfig, ~/.aws/credentials), which have been battle‑tested for decades.
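A minimal sketch of that reuse; the credential values below are placeholders, and the `aws` and `gh` invocations are shown only as comments:

```shell
# CLI tools inherit credentials from the environment or standard config
# files; no per-service OAuth flow. Values below are placeholders.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export GITHUB_TOKEN="ghp_exampletoken"
# aws s3 ls      # would read AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
# gh repo list   # would read GITHUB_TOKEN
echo "token loaded: ${GITHUB_TOKEN:+yes}"
```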

3. Black‑Box Runtime and Lost Composability

MCP typically communicates via JSON‑RPC, producing deep nested JSON payloads that are hard to read and debug. Developers must reconstruct state machines mentally, turning the debugging experience into a “JSON dump” nightmare. CLI, on the other hand, offers straightforward exit codes, stderr logs, and text streams that fit naturally into Unix pipelines, preserving the composability that agents need.
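The contrast is easy to demonstrate with any failing command: the exit code and stderr are plain text an agent can branch on directly.

```shell
# grep exits with status 1 when nothing matches: a one-number signal an
# agent can check, instead of unpacking a nested JSON-RPC error object.
status=0
grep "needle" /dev/null || status=$?
echo "exit code: $status"
```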

Why CLI Is the Agent’s Mother Tongue

3.1 Agents Can Read Documentation Directly

Modern agents can parse --help output, man pages, and README files without needing a pre‑defined schema, allowing them to discover tool capabilities on the fly.
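`mytool` below is a hypothetical stand‑in, but the pattern is the point: the interface description lives in the tool itself, readable on demand rather than pre‑registered.

```shell
# Hypothetical tool: its own --help output is the "schema", fetched
# only when an agent actually needs it.
mytool() {
  if [ "$1" = "--help" ]; then
    echo "usage: mytool [--json] <file>"
    return 0
  fi
  echo "processing: $1"
}
mytool --help
```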

3.2 On‑Demand Loading Reduces Token Overhead

Instead of loading all schemas at launch, CLI‑first designs fetch documentation only when a tool is actually used, keeping token consumption minimal.
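A sketch of the lazy pattern, using a hypothetical `run_tool` helper: documentation is consulted at call time, and nothing is loaded for tools that go unused.

```shell
# Hypothetical helper: read a tool's help text only at the moment of
# invocation, then run it.
run_tool() {
  tool="$1"; shift
  "$tool" --help >/dev/null 2>&1 || true   # docs fetched on demand
  "$tool" "$@"
}
run_tool echo "hello"
```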

3.3 Text‑Based Output Aligns with LLM Strengths

LLMs excel at processing plain text. CLI tools naturally emit logs, errors, diffs, and configuration files as text, and many provide structured formats like JSON (--json), YAML (--yaml), or tables (--format table), which are easily consumable by agents.

# Example: count hotfix commits per author
git log --grep="hotfix" --pretty=format:'%an' | sort | uniq -c | sort -nr
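Structured flags compose the same way. Here a hypothetical JSON‑emitting step feeds a downstream filter; a real agent would likely reach for `jq`, but `grep` keeps the sketch dependency‑free:

```shell
# A JSON line from a hypothetical `--json` run, filtered in the pipeline.
printf '{"name": "build", "status": "ok"}\n' |
  grep -o '"status": "[a-z]*"'
```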

Side‑by‑Side Comparison (Key Dimensions)

Context Cost : MCP loads massive schemas upfront (high token cost); CLI reads help on demand (low token cost).

Tool Scale : MCP’s schema size grows with tool count; CLI scales naturally because each command has a uniform interface.

Auth Integration : MCP requires per‑service auth plumbing; CLI reuses OS/SSH/env mechanisms.

Debug Experience : MCP’s JSON‑RPC chain is opaque; CLI provides immediate exit codes, stderr, and logs.

Composability : MCP composes at the protocol layer, which is rigid and hard to recombine; CLI composes freely via pipes, redirects, and scripts.

Target Audience : MCP suits platform‑level governance; CLI serves front‑line developers needing rapid iteration.

Final Thoughts: Connection Over Protocol

In the AI era, the ability to connect tools outweighs the elegance of a universal protocol. MCP attempts to build a “glass house” around agents, while CLI throws agents onto the construction site with all the tools, manuals, and debugging aids already in place.

Practical Advice for Agent Toolchain Builders

6.1 Ask Yourself: Protocol or Connection?

If governance and standardization are your primary goals, a protocol may help.

If seamless execution and rapid composition matter more, prioritize direct connections via CLI.

6.2 Don’t Treat Schema as the Default

Use schemas only when strong type guarantees are essential; otherwise let agents explore CLI interfaces.

6.3 Make Composability a First Principle

Ensure output can be consumed by the next tool (e.g., JSON).

Avoid non‑machine‑readable decorations like emojis or colored progress bars.
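A minimal illustration of that principle, where `emit_plain` is a hypothetical tool: one tab‑separated record per line, so the next stage can parse it with standard tools.

```shell
# Hypothetical tool emitting machine-readable records: no colors, emojis,
# or progress bars, just one tab-separated record per line.
emit_plain() { printf 'build\tok\ntest\tfail\n'; }
# Downstream consumer: report which steps failed.
emit_plain | awk -F'\t' '$2 == "fail" {print $1}'
```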

6.4 Treat Debugging as a Product Feature

Leverage CLI’s mature stderr/exit‑code model; avoid the extra mental load of debugging JSON‑RPC pipelines.

Overall, the shift from MCP to CLI‑first reflects a pragmatic move toward low‑friction, text‑centric, and easily debuggable engineering practices that better serve real‑world AI agents.

Tags: engineering, CLI, AI agents, LLM, MCP, Protocol, Toolchain
Written by

Sohu Tech Products

A knowledge-sharing platform for Sohu's technology products. As a leading Chinese internet brand with media, video, search, and gaming services and over 700 million users, Sohu continuously drives tech innovation and practice. We’ll share practical insights and tech news here.
