How General‑Purpose Agents Are Converging on Claude Code and Deep Agent Designs
The article analyzes the 2025 shift toward a unified "general‑type" agent architecture exemplified by Claude Code and Deep Agent, detailing industry adoption, core technical features, skill‑based extensions, long‑running capabilities, and practical steps for building domain‑specific agents.
Introduction
The author reflects on the "Agent Year" of 2025, summarizing a year of hands‑on experience with Claude Code, Deep Agent, and related technologies, and argues that the architectural debate has settled on a general‑type agent model represented by Claude Code and Deep Agent.
What Is a Deep Agent?
A Deep Agent must satisfy two key characteristics:
Industry-specific depth: The agent must embed domain knowledge derived from deep practice and consensus, such as detailed SOPs, case studies, and tacit industry rules.
Long-running stability: The agent must run for extended periods without crashing and handle multi-step, tool-heavy tasks reliably.
Examples include a recruitment agent that generates professional background reports and a marketing agent that selects KOLs and provides pricing quotes, both requiring precise inputs, tasks, and evaluation criteria.
Agent Skills and Progressive Disclosure
Anthropic’s Agent Skills are presented as a hierarchical file structure in which a SKILL.md file contains required metadata (name, description) plus the core content, while additional resources (e.g., forms.md) are loaded on demand. This progressive disclosure reduces context load and enables dynamic skill discovery.
Key benefits:
Better context management by loading only needed information.
Engineers stay in a "business flow" state, abstracting high‑level logic while the model executes details.
High reusability: skills are simple folders that can be copied across projects.
More stable code execution compared with large tool lists.
Relevant repository: https://github.com/anthropics/skills/tree/main/skills.
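The progressive-disclosure pattern can be sketched in a few lines of Python. This is an illustrative reimplementation, not Anthropic's actual loader: only each skill's frontmatter metadata (name, description) is read up front, and the full SKILL.md body is loaded only when the skill is actually invoked.

```python
from pathlib import Path

def read_frontmatter(skill_md: Path) -> dict:
    """Parse only the `key: value` frontmatter between the `---` delimiters."""
    meta, in_block = {}, False
    for line in skill_md.read_text().splitlines():
        if line.strip() == "---":
            if in_block:
                break            # closing delimiter: stop before the body
            in_block = True
            continue
        if in_block and ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def discover_skills(root: Path) -> list[dict]:
    """Level 1: load only cheap metadata for every skill folder."""
    return [read_frontmatter(p) | {"path": str(p.parent)}
            for p in root.glob("*/SKILL.md")]

def load_skill_body(skill_dir: Path) -> str:
    """Level 2: load the full instructions only when the skill is selected."""
    text = skill_dir.joinpath("SKILL.md").read_text()
    return text.split("---", 2)[-1]  # drop the frontmatter, keep the body
```

The key property is that discover_skills keeps the context cost per skill to a couple of lines, so an agent can be aware of many skills while paying for none of their bodies.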
Long‑Running Techniques
Drawing on LangGraph's analysis, four capabilities enable agents to run for long durations without failure:
Continuous operation without crashes (e.g., Claude playing Pokémon for 24 hours).
Multi‑step task execution with extensive tool calls (e.g., planning a multi‑city trip across calendars, emails, and travel sites).
Sub‑agents provide context isolation, parallel execution, specialized tool sets, and token‑efficient result aggregation.
File‑system utilities allow agents to offload large context to files, share workspaces, and maintain long‑term memory.
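The file-system offloading idea in the last point can be sketched as follows. Function names and the token threshold are illustrative, not LangGraph's API: when a tool result is too large to keep in context, it is written to the shared workspace and the agent receives a path plus a short preview instead of the raw content.

```python
from pathlib import Path

TOKEN_LIMIT = 2_000              # illustrative cap for a single tool result

def approx_tokens(text: str) -> int:
    return len(text) // 4        # rough rule of thumb: ~4 characters per token

def offload_if_large(result: str, workspace: Path, name: str) -> str:
    """Return the raw result if small; otherwise save it and return a pointer."""
    if approx_tokens(result) <= TOKEN_LIMIT:
        return result
    path = workspace / f"{name}.txt"
    path.write_text(result)
    preview = result[:200]
    return (f"[result saved to {path} ({approx_tokens(result)} tokens); "
            f"preview: {preview}...]")
```

Because the file lives in a shared workspace, sub-agents can read it later without the content ever re-entering the main agent's context window, which is also how long-term memory across turns can be approximated.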
Building a Deep Agent – Two Dimensions
Dimension 1: Seamlessly Embedding Business Knowledge
Common approaches (prompt‑only, RAG) are brittle. Anthropic’s 2025 Agent Skills offer a smoother solution by packaging instructions, scripts, and resources that agents can load on demand.
Dimension 2: Ensuring Long‑Running Reliability
LangGraph proposes a four-layer tool architecture:
Atomic Layer: ~20 core, orthogonal tools (read/write files, bash, etc.) for stability.
Sandbox Utilities Layer: uses a generic bash tool to invoke any installed program, avoiding a bloated tool list.
Code/Packages Layer: encapsulates complex logic in reusable Python packages, reducing round-trips.
Higher-level orchestration: combines the layers with progressive disclosure to keep the context window lean.
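The contrast between the first two layers can be made concrete with a small sketch (tool names are illustrative): a few orthogonal atomic tools, plus a single generic bash tool that reaches any installed program without growing the registered tool list.

```python
import subprocess
from pathlib import Path

# Atomic layer: a small, orthogonal core (an illustrative subset of the ~20 tools).
def read_file(path: str) -> str:
    return Path(path).read_text()

def write_file(path: str, content: str) -> str:
    Path(path).write_text(content)
    return f"wrote {len(content)} bytes to {path}"

# Sandbox layer: one generic entry point to every installed program,
# instead of registering a separate tool per CLI.
def run_bash(command: str, timeout: int = 30) -> str:
    proc = subprocess.run(command, shell=True, capture_output=True,
                          text=True, timeout=timeout)
    return proc.stdout if proc.returncode == 0 else f"error: {proc.stderr}"

TOOLS = {"read_file": read_file, "write_file": write_file, "bash": run_bash}
```

The design choice here is orthogonality: each atomic tool does one thing, and everything else is reached through bash, so the model's tool list stays small and stable no matter how many programs the sandbox contains.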
Context Compression and Hierarchical Tool Calls
When token usage reaches roughly 80% of the context window (e.g., 160k of a 200k-token limit), a summarization model automatically compresses earlier context. Hierarchical tool calls (atomic → sandbox → code) prevent context confusion and improve token efficiency.
Convergence of Agent Architectures
By late 2025, the field converged around the Claude Agent SDK and Deep Agent, featuring a main‑agent/sub‑agent hierarchy, planning capabilities, and a file‑system for persistent state. Additional innovations include automatic context compression and layered tool design.
Practical Migration from Workflow to Agent
To upgrade existing workflows, mimic Claude’s Deep Research prompt structure: define a detailed system prompt for the main agent, a sub‑agent for execution, and optionally a post‑processing agent. Using the latest SOTA models (Claude 4.5, Gemini 3, GPT‑5.2) maximizes success, but lower‑tier models can still be used with simplified tasks.
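The three-role structure described above can be captured as a small configuration sketch. The prompt wording and role names here are placeholders, not Claude's actual Deep Research prompts:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    name: str
    system_prompt: str
    tools: list[str] = field(default_factory=list)

# Illustrative skeleton mirroring the main/sub/post-processing split.
main_agent = AgentSpec(
    name="lead_researcher",
    system_prompt=("Plan the task, split it into sub-tasks, delegate each "
                   "to the executor, then merge the results."),
    tools=["delegate", "read_file", "write_file"],
)
executor = AgentSpec(
    name="executor",
    system_prompt="Carry out one sub-task with the tools provided and report back.",
    tools=["bash", "web_search"],
)
post_processor = AgentSpec(        # optional final formatting pass
    name="post_processor",
    system_prompt="Polish the merged result into the requested output format.",
)
```

Migrating a workflow then mostly means moving hard-coded step logic out of the pipeline and into these system prompts, leaving the model to decide the ordering at run time.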
Future Outlook
The author anticipates continued evolution of agent‑native models (e.g., DeepSeek v3.2, Kimi 2, Gemini 3) and encourages experimentation with the described techniques throughout 2026.
Baobao Algorithm Notes
Author of the BaiMian large model, offering technology and industry insights.