Integrating Anthropic‑Style Skills into LangChain DeepAgents: A Step‑by‑Step Guide

This article explains how to bring Anthropic's Skills concept into the open‑source LangChain DeepAgents framework by detailing the discovery, system‑prompt injection, progressive loading, and execution phases, and provides a complete code‑driven example using a web‑research Skill.


Background

Skills are reusable knowledge capsules: each Skill lives in its own folder containing at least a SKILL.md file with a name, a description, and a standard operating procedure (SOP). Loading a Skill only when it is actually needed reduces token usage and improves execution stability.
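As a sketch, a minimal SKILL.md for a web-research Skill might look like the following; the name and description fields in the YAML header are what gets discovered, while everything after the header is the SOP body (the exact SOP steps here are illustrative):

```markdown
---
name: web-research
description: Plan and execute multi-step web research, then write a cited report.
---

# Web Research SOP

1. Write a research plan to a working file.
2. Dispatch one search sub-task per open question.
3. Aggregate the findings and produce a final report with sources.
```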

Target framework: LangChain DeepAgents

DeepAgents is LangChain’s open‑source framework for multi‑step agents and already supports Skills via its middleware system.

Implementation

The integration follows four stages:

1. Discover Skills in a configured directory (e.g., ~/.deepagents/agent/skills) and build a SkillMetadata list from the YAML header of each SKILL.md.

2. Inject the metadata list into the system prompt using a SkillsMiddleware that implements before_agent and wrap_model_call.

3. When the LLM decides to use a Skill, issue a read_file call to load the full SKILL.md (progressive loading).

4. Execute the SOP with tools such as read_file, write_file, task or a custom shell tool.
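The discovery stage (step 1) can be sketched as a directory scan that parses the YAML-style header of each SKILL.md. This is a minimal stdlib-only sketch; the names discover_skills and SkillMetadata are illustrative, not the DeepAgents API:

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class SkillMetadata:
    name: str
    description: str
    path: str  # where the full SKILL.md can be loaded from later (progressive loading)

def discover_skills(skills_dir: str) -> list[SkillMetadata]:
    """Scan skills_dir for */SKILL.md files and parse their YAML-style headers."""
    skills = []
    for skill_md in sorted(Path(skills_dir).glob("*/SKILL.md")):
        text = skill_md.read_text(encoding="utf-8")
        if not text.startswith("---"):
            continue  # no frontmatter header; skip this folder
        header = text.split("---", 2)[1]
        fields = {}
        for line in header.splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                fields[key.strip()] = value.strip()
        if "name" in fields and "description" in fields:
            skills.append(SkillMetadata(fields["name"], fields["description"], str(skill_md)))
    return skills
```

Only the name, description, and path survive into the metadata list; the SOP body stays on disk until the agent asks for it.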

SkillsMiddleware example

class SkillsMiddleware(AgentMiddleware):
    def before_agent(self, state: SkillsState, runtime: Runtime) -> SkillsStateUpdate | None:
        # Generate the Skill metadata list (discovery stage)
        ...

    def wrap_model_call(self, request: ModelRequest,
                       handler: Callable[[ModelRequest], ModelResponse]) -> ModelResponse:
        # Append the Skills section (rendered from the metadata gathered
        # in before_agent) to the system prompt
        if request.system_prompt:
            system_prompt = request.system_prompt + "\n\n" + skills_section
        else:
            system_prompt = skills_section
        return handler(request.override(system_prompt=system_prompt))

Shell tool definition

import subprocess

# Configuration values mirroring the ShellMiddleware settings used below
SHELL_TIMEOUT = 120.0
SHELL_ENV = None          # None inherits the parent process environment
WORKSPACE_ROOT = "./workspace"

@tool('shell', description='Execute a shell command')
def shell_tool(command: str, runtime: ToolRuntime[None, AgentState]) -> ToolMessage | str:
    try:
        result = subprocess.run(
            command,
            check=False,
            shell=True,
            capture_output=True,
            text=True,
            timeout=SHELL_TIMEOUT,
            env=SHELL_ENV,
            cwd=WORKSPACE_ROOT,
        )
        return result.stdout
    except Exception as e:
        return str(e)

Warning: Shell execution is high‑risk; it should be guarded by human‑in‑the‑loop approval or sandboxing, with limits on runtime, output size and allowed commands.
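One lightweight guard along these lines is an allowlist on the invoked program plus a timeout and an output cap. This is a sketch, not the DeepAgents ShellMiddleware; the guard values and function name are assumptions:

```python
import shlex
import subprocess

# Illustrative guard settings; tune these per deployment
ALLOWED_COMMANDS = {"ls", "cat", "echo", "grep", "python"}
MAX_OUTPUT_BYTES = 100_000
TIMEOUT_SECONDS = 30.0

def run_guarded(command: str) -> str:
    """Run a shell command only if its program is allowlisted; cap output size."""
    parts = shlex.split(command)
    program = parts[0] if parts else ""
    if program not in ALLOWED_COMMANDS:
        return f"Refused: '{program}' is not in the allowed command list."
    try:
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True,
            timeout=TIMEOUT_SECONDS, check=False,
        )
        output = result.stdout + result.stderr
        return output[:MAX_OUTPUT_BYTES]  # truncate runaway output
    except subprocess.TimeoutExpired:
        return f"Refused: command exceeded {TIMEOUT_SECONDS}s timeout."
```

In production this belongs behind human-in-the-loop approval or a sandbox; the allowlist alone does not stop a permitted program from doing damage.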

Full agent construction

SYSTEM_PROMPT = """You are an agent equipped with multiple Skills to help users complete tasks."""

def make_backend(runtime):
    return CompositeBackend(
        default=FilesystemBackend(),
        routes={
            "/fs/": FilesystemBackend(root_dir="./fs", virtual_mode=True),
            "/memories/": StoreBackend(runtime),
        },
    )

skills_middleware = SkillsMiddleware(skills_dir=USER_SKILLS_DIR, assistant_id="agent")
shell_middleware = ShellMiddleware(workspace_root=WORKSPACE_ROOT, timeout=120.0, max_output_bytes=100_000)

research_subagent = {
    "name": "search-agent",
    "description": "Agent for web search and research.",
    "system_prompt": "You are an intelligent web‑search and research agent.",
    "tools": [search, fetch_url],
    "model": model,
}

agent = create_deep_agent(
    model=model,
    tools=[],
    subagents=[research_subagent],
    backend=make_backend,
    middleware=[skills_middleware, shell_middleware],
    system_prompt=SYSTEM_PROMPT,
    debug=True,
).with_config({"recursion_limit": RECURSION_LIMIT})

Concrete example: web‑research Skill

Steps to test:

1. Start the agent with langgraph dev.

2. Copy the web‑research Skill from the DeepAgents CLI examples into the user Skills directory.

3. Submit a query.

4. Observe the execution trace: metadata is injected, the LLM selects the Skill, reads SKILL.md, writes a research plan, dispatches parallel sub‑tasks to the research sub‑agent, aggregates the results, and produces a final report.

Benefits of Skills

Complex task decomposition: explicit step‑by‑step guidance improves reliability for multi‑step tasks.

Efficient context usage: progressive disclosure saves token budget.

Easy sharing and maintenance: Skills are plain Markdown plus scripts, suitable for version control.

Continuous learning: agents can package newly discovered procedures as Skills for future reuse.
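The context-usage benefit above can be roughly quantified: the agent pays for one metadata line per Skill on every turn, while the full SOP costs tokens only on the turns that actually use it. A sketch using a crude ~4-characters-per-token heuristic (an assumption, not a real tokenizer):

```python
def rough_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text
    return max(1, len(text) // 4)

# Metadata injected into the system prompt on every turn: one line per Skill
metadata_line = "web-research: Plan and execute multi-step web research."

# Full SKILL.md body, read via read_file only when the Skill is selected
full_skill_md = "---\nname: web-research\n---\n" + "Detailed SOP guidance line.\n" * 200

always_paid = rough_tokens(metadata_line)
on_demand = rough_tokens(full_skill_md)
print(f"Paid every turn: ~{always_paid} tokens; deferred until use: ~{on_demand} tokens")
```

With many Skills installed, the per-turn cost grows by one line per Skill rather than one full SOP per Skill.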

Source code: https://github.com/pingcy/deepagents-demo

Tags: LLM, prompt engineering, tool integration, LangChain, Agent Skills, DeepAgents
Written by AI Large Model Application Practice

Focused on deep research and development of large-model applications. Authors of "RAG Application Development and Optimization Based on Large Models" and "MCP Principles Unveiled and Development Guide". Primarily B2B, with B2C as a supplement.
