How to Quickly Integrate Agent Skills in LangChain DeepAgents

This article provides a step‑by‑step guide to using Agent Skills in LangChain DeepAgents, covering the Skills directory structure, the four engineering steps (discovery, system‑prompt injection, progressive loading, execution), and two practical examples—a simple skill lookup and a complex docx‑processing skill—complete with code snippets and troubleshooting tips.


1. Skills Quick Review

Skills, introduced by Anthropic, are lightweight, open‑format modules that package domain‑specific knowledge for agents. Each Skill is a folder containing at least a SKILL.md file and optionally scripts, references, and assets subfolders.

my-skill/
├── SKILL.md   # required: name + description
├── scripts/   # optional executable code
├── references/ # optional docs
└── assets/    # optional resources

The SKILL.md must define non‑empty name and description fields. The description explains the Skill’s purpose, while the body can detail procedures, examples, and required tools.
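A minimal SKILL.md might look like the following (the skill name, description, and steps here are illustrative, not taken from an official example):

```markdown
---
name: pdf-report
description: Generate a summary report from a PDF file. Use when the user asks to summarize or extract data from a PDF.
---

# PDF Report Skill

1. Extract the text with scripts/extract.py.
2. Summarize the extracted text.
3. Save the summary as report.md.
```

The YAML front matter carries the metadata the agent sees up front; the markdown body below it is only loaded when the Skill is activated.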

2. DeepAgents Skills Overview

2.1 Using Skills in DeepAgents

DeepAgents bundles the full discovery‑activate‑execute pipeline. Developers only need to define a Skill and pass its directory path to create_deep_agent:

agent = create_deep_agent(
    model=llm,
    skills=["/skills"],  # path to the Skills folder
)
agent.invoke({"messages": [{"role": "user", "content": "What skills do you have?"}]})

2.2 Engineering Steps for Skills

Discovery & Identification: The FileSystemMiddleware scans the configured directory, reads each SKILL.md, and extracts the YAML front matter (name and description) into a SkillMetadata list.

System-Prompt Injection: All Skill metadata are concatenated into a prompt segment and injected into the system prompt, so the LLM sees the available Skills at the start of every turn.

Progressive Loading: When the model decides to use a particular Skill, the full SKILL.md content (and any optional resources) is read and added to the context.

Task Execution & Completion: The agent follows the detailed instructions in SKILL.md, invokes any required tools (e.g., scripts, file reads), and returns the final result.
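The first two steps can be sketched in plain Python. This is a simplified illustration of the idea, not DeepAgents' actual implementation; in particular, the front-matter parsing below is deliberately naive (a real parser would use a YAML library):

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass
class SkillMetadata:
    name: str
    description: str


def discover_skills(skills_dir: str) -> list[SkillMetadata]:
    """Scan skills_dir, read each SKILL.md, and extract name/description."""
    skills = []
    for skill_md in sorted(Path(skills_dir).glob("*/SKILL.md")):
        meta = {}
        in_frontmatter = False
        for line in skill_md.read_text(encoding="utf-8").splitlines():
            if line.strip() == "---":
                if in_frontmatter:
                    break  # end of the YAML header
                in_frontmatter = True
                continue
            if in_frontmatter and ":" in line:
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
        if meta.get("name") and meta.get("description"):
            skills.append(SkillMetadata(meta["name"], meta["description"]))
    return skills


def skills_prompt_segment(skills: list[SkillMetadata]) -> str:
    """Concatenate metadata into the segment injected into the system prompt."""
    lines = ["You have access to the following skills:"]
    lines += [f"- {s.name}: {s.description}" for s in skills]
    return "\n".join(lines)
```

Only the name/description pairs reach the prompt at this stage; the SKILL.md bodies stay on disk until a Skill is actually activated, which is what keeps the per-turn token cost low.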

2.3 DeepAgents Implementation Details

DeepAgents builds on LangChain 1.0's create_agent and adds two middlewares: FileSystemMiddleware, which handles directory traversal and metadata extraction, and SkillsMiddleware, which injects the metadata into the system prompt (via the before_agent hook) and appends the prompt segment in wrap_model_call so the model always sees the Skill list.
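Conceptually, the two hooks cooperate like this. The class below is a plain-Python stand-in for the pattern, not DeepAgents' real middleware classes or signatures:

```python
class SkillsMiddlewareSketch:
    """Illustrative stand-in for SkillsMiddleware's two injection points."""

    def __init__(self, skill_metadata: list[tuple[str, str]]):
        self.skill_metadata = skill_metadata  # (name, description) pairs

    def before_agent(self, state: dict) -> dict:
        # Runs once before the agent loop: record the skill list in state.
        state["skills_segment"] = "\n".join(
            f"- {name}: {desc}" for name, desc in self.skill_metadata
        )
        return state

    def wrap_model_call(self, state: dict, system_prompt: str) -> str:
        # Runs on every model call: append the segment so the model
        # always sees the current Skill list.
        return system_prompt + "\n\n## Available skills\n" + state["skills_segment"]
```

The key design point is the split: discovery happens once up front, while injection happens on every model call, so the Skill list survives context truncation and multi-turn conversations.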

3. DeepAgents Skill Hands‑On

3.1 Simple Skill Example

Download the official example Skill from GitHub and place it under the project’s skills folder:

https://github.com/langchain-ai/deepagents/tree/main/libs/cli/examples/skills/langgraph-docs

Python setup:

from dotenv import load_dotenv
from langchain_deepseek import ChatDeepSeek
from deepagents import create_deep_agent
from langgraph.checkpoint.memory import MemorySaver
from deepagents.backends.filesystem import FilesystemBackend

load_dotenv()
model = ChatDeepSeek(model="deepseek-chat")
checkpointer = MemorySaver()
agent = create_deep_agent(
    model=model,
    backend=FilesystemBackend(root_dir="./", virtual_mode=True),
    skills=["./skills/"],
    checkpointer=checkpointer,
)
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What is langgraph?"}]},
    config={"configurable": {"thread_id": "12345"}},
)
print(result)

The agent correctly identifies the langgraph-docs Skill, loads its description, and answers the query.

3.2 Complex Skill Example (Docx Processing)

Obtain a docx‑processing Skill from SkillHub, unzip it into the skills folder, and configure a LocalShellBackend to run any required scripts.

from pathlib import Path
from dotenv import load_dotenv
from langgraph.checkpoint.memory import MemorySaver
from langchain_deepseek import ChatDeepSeek
from deepagents.backends import LocalShellBackend
from deepagents import create_deep_agent

load_dotenv()
model = ChatDeepSeek(model="deepseek-chat")
checkpointer = MemorySaver()
root_dir = Path.cwd().as_posix()
backend = LocalShellBackend(root_dir=root_dir, inherit_env=True, timeout=120, max_output_bytes=100000)
system_prompt = '''
## Role
You are a professional, efficient AI assistant capable of integrating knowledge and tools.
## Core Tasks
- Answer user questions using available skills.
- Follow the priority: accuracy > usefulness > brevity > friendliness.
- Clarify ambiguous queries and decompose complex ones.
## Notes
read_file does not support Windows absolute paths; use POSIX format like /xxx/xxx/SKILL.md.
'''
agent = create_deep_agent(
    model=model,
    backend=backend,
    skills=[root_dir + r'/skills'],
    system_prompt=system_prompt,
    checkpointer=checkpointer,
)
while True:
    question = input('Enter query (q to quit): ')
    if not question:
        continue
    if question == 'q':
        break
    for mode, chunk in agent.stream(
        {"messages": [{"role": "user", "content": question}]},
        config={"configurable": {"thread_id": "12345"}},
        stream_mode=["updates"],
    ):
        # Display loading of Skills and tool calls (full rendering omitted for brevity)
        print(mode, chunk)
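The POSIX-path note in the system prompt above can also be handled on the Python side, by normalizing Windows-style paths before they reach the agent. The helper below is a small illustration using the standard library, not part of DeepAgents:

```python
from pathlib import PureWindowsPath


def to_posix(path: str) -> str:
    """Convert a Windows-style path to the POSIX form read_file expects."""
    return PureWindowsPath(path).as_posix()


# e.g. to_posix(r"C:\projects\skills\docx\SKILL.md")
#      -> "C:/projects/skills/docx/SKILL.md"
```

This is the same reason the example uses Path.cwd().as_posix() for root_dir: every path the model sees uses forward slashes regardless of the host OS.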

Running the agent with the prompt “Write a 100-word joke and save it to 笑话.docx (“joke.docx”)” triggers the docx Skill, which generates the joke and creates the document.

4. Summary

The article fully dissects DeepAgents’ Agent Skill mechanism. By combining SkillsMiddleware and FileSystemMiddleware, DeepAgents implements a complete workflow: discovery, system‑prompt injection, progressive loading, and execution. The four‑step engineering process is demonstrated with a simple skill lookup and a more involved docx‑processing skill, highlighting path configuration and prompt‑design considerations for reliable Skill usage.

Written by Fun with Large Models
Master's graduate from Beijing Institute of Technology, published four top‑journal papers, previously worked as a developer at ByteDance and Alibaba. Currently researching large models at a major state‑owned enterprise. Committed to sharing concise, practical AI large‑model development experience, believing that AI large models will become as essential as PCs in the future. Let's start experimenting now!
