Unlocking Anthropic’s New Agent Toolkit: MCP, PTC, Skills, and Subagents Explained

This article breaks down Anthropic’s latest Agent engineering concepts—MCP, Programmatic Tool Calling (PTC), Skills, and Subagents—showing how they work together to reduce latency, token cost, and context overload while enabling modular, scalable AI workflows.


Anthropic has introduced two new concepts for building LLM‑driven agents: Programmatic Tool Calling (PTC) and Skills, which complement the existing Model Context Protocol (MCP). While MCP standardises how agents access external resources, PTC and Skills aim to make complex tool‑calling sequences more efficient and knowledge‑rich.

MCP + PTC: A Connected Toolbox

MCP, first released at the end of 2024, acts like a "USB‑C" for AI, allowing developers to expose databases, APIs, or file systems via an MCP server that any MCP‑compatible agent can consume without writing glue code. The main drawback of naïve tool‑calling is the "ping‑pong" effect: LLM inference → tool call → result inserted into context → another inference, which leads to high latency and token cost.
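The ping‑pong effect can be made concrete with a small sketch. The functions `call_model` and `run_tool` below are hypothetical stubs standing in for an LLM API and a tool executor; the point is only that every tool result must pass back through the model, costing a full inference round‑trip each time:

```python
# Naive tool-calling loop: every tool result goes back through the model.
# `call_model` and `run_tool` are illustrative stubs, not a real SDK.

def call_model(context):
    # Pretend the model requests one tool per turn, then finishes.
    if "orders" not in context:
        return {"tool": "query_orders", "args": {}}
    if "chart" not in context:
        return {"tool": "generate_chart", "args": {"data": context["orders"]}}
    return {"final": context["chart"]}

def run_tool(name, args):
    tools = {
        "query_orders": lambda: [{"id": 1, "total": 42}],
        "generate_chart": lambda data: f"chart({len(data)} orders)",
    }
    return tools[name](*args.values()) if args else tools[name]()

def agent_loop():
    context = {}
    turns = 0
    while True:
        turns += 1
        step = call_model(context)          # model inference round-trip
        if "final" in step:
            return step["final"], turns
        result = run_tool(step["tool"], step["args"])  # tool round-trip
        # Each result is inserted back into the context, growing the prompt.
        context[step["tool"].split("_")[-1]] = result

result, turns = agent_loop()  # 3 model round-trips for just 2 tool calls
```

Two tool calls cost three model invocations, and the context grows with every intermediate result.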

PTC solves this by letting the LLM generate a complete Python script that runs in a sandbox, embedding multiple tool calls, loops, conditionals, and calculations in a single execution. Example (toolA and toolB are placeholder tool handles):

# Runs as one sandboxed script: no model round-trip between the calls.
orders = await toolA.query_orders()
chart = await toolB.generate_chart(orders)
return chart

This single‑shot execution reduces round‑trip latency and token usage compared to the step‑by‑step approach.

Skills: Knowledge Capsules for Agents

Skills are modular packages that inject domain‑specific knowledge into an agent. Each Skill consists of a SKILL.md descriptor, related scripts, and auxiliary resources (templates, examples). When a user asks a task like "extract text from an uploaded PDF," the agent discovers the appropriate Skill, loads its full description, and follows the prescribed steps.

The core mechanism is Progressive Disclosure: the agent first learns the Skill name and description from metadata, then loads the full Skill only when needed, avoiding context overload.

Subagents: Divide‑and‑Conquer Architecture

Subagents break a complex task into independent subtasks, each running in its own isolated context with dedicated prompts, models, tools, and Skills. This prevents context pollution, reduces token waste, and enables role‑specific configurations. Example Subagent definitions (Python‑style):

subagents_config = {
    'security-auditor': AgentDefinition(
        description='Expert in identifying security vulnerabilities (OWASP Top 10).',
        prompt='You are a rigorous security auditor. Focus ONLY on SQL injection, XSS, and auth bypass.',
        tools=['read_file', 'grep'],
        model='claude-3-5-sonnet-20240620'
    ),
    'test-runner': AgentDefinition(
        description='Executes test suites and reports results.',
        prompt='You are a QA engineer. Run tests, analyze logs, and report pass/fail rates.',
        tools=['bash', 'read_file'],
        model='claude-3-haiku-20240307'
    )
}
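A dispatcher over such a configuration might look like the sketch below. `AgentDefinition` and the routing logic are hypothetical (the source does not define them); the config is trimmed to one entry to keep the example self‑contained:

```python
from dataclasses import dataclass

# Hypothetical definition matching the config structure above.
@dataclass
class AgentDefinition:
    description: str
    prompt: str
    tools: list
    model: str

subagents_config = {
    'security-auditor': AgentDefinition(
        description='Expert in identifying security vulnerabilities.',
        prompt='You are a rigorous security auditor.',
        tools=['read_file', 'grep'],
        model='claude-3-5-sonnet-20240620',
    ),
}

def dispatch(task, config):
    # Route a task to the first subagent whose tool set covers it;
    # the chosen subagent would then run in its own isolated context
    # with its own prompt and model.
    for name, agent in config.items():
        if all(t in agent.tools for t in task['required_tools']):
            return name, agent.model
    return None, None

name, model = dispatch({'required_tools': ['grep']}, subagents_config)
```

Because each entry carries its own model, cheap models can serve simple roles (like the Haiku‑backed test runner above) while expensive models handle the demanding ones.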

Subagents are not unique to Claude; similar patterns appear in frameworks like LangChain’s DeepAgents.

Comparison, Differences, and Synergy

MCP + PTC provides the low‑level infrastructure for reliable, fast external calls. Skills sit on top, offering reusable knowledge capsules for well‑defined tasks. Subagents orchestrate multiple Skills and tool calls, handling large, multi‑step workflows while keeping each agent’s context clean.

Typical usage patterns:

Use MCP when an agent needs direct access to a database or API.

Use Skills for repeatable, domain‑specific operations such as document conversion or report generation.

Deploy Subagents when a task requires several specialised roles (e.g., code review, testing, data analysis) that would otherwise overload a single agent.

End‑to‑End Example

The main agent receives a complex request: generate a full project report, perform code review, update documentation, and summarise results.

The main agent delegates code review to a "Review Subagent" with its own context and tools.

The Subagent uses a Skill to format the review output into Markdown.

If data from a company database is needed, the Subagent (or Skill) calls the MCP + PTC toolbox.

Each Subagent returns a concise result; the main agent aggregates them into the final response.
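The flow above can be condensed into a sketch. Every function here is a hypothetical stub (there is no real Anthropic SDK call behind them); the structure is what matters: isolated subagents, a Skill used inside one of them, and a main agent that aggregates only concise results:

```python
# Illustrative orchestration of the end-to-end example; all functions
# are stubs standing in for real subagent and Skill invocations.

def format_review_skill(findings):
    # Skill: format review findings as Markdown bullet points.
    return "\n".join(f"- {f}" for f in findings)

def review_subagent(code):
    # Runs in its own context with its own tools and prompt.
    findings = [f"issue in {code}: unchecked input"]
    return format_review_skill(findings)

def docs_subagent(project):
    # A second isolated subagent for documentation updates.
    return f"Docs updated for {project}."

def main_agent(request):
    # Delegate each subtask, then aggregate only the concise results
    # into the final response -- no subagent's full context leaks back.
    parts = [
        review_subagent(request["code"]),
        docs_subagent(request["project"]),
    ]
    return f"Report for {request['project']}:\n" + "\n".join(parts)

report = main_agent({"project": "demo", "code": "app.py"})
```

A database lookup would slot in the same way: a subagent (or Skill) would call out through the MCP + PTC layer rather than pulling raw data into the main agent's context.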

This layered approach combines the efficiency of MCP + PTC, the knowledge encapsulation of Skills, and the organisational benefits of Subagents, delivering scalable, maintainable AI agents.

Written by

AI Large Model Application Practice

Focused on deep research and development of large-model applications. Authors of "RAG Application Development and Optimization Based on Large Models" and "MCP Principles Unveiled and Development Guide". Primarily B2B, with B2C as a supplement.
