DeepAgents Quickstart Guide: A Full Walkthrough of Core Features
This article introduces LangChain's DeepAgents framework, explains its design goals, and compares it with LangChain and LangGraph. It then walks through example code step by step, showing how task planning, sub‑agent delegation, tool usage, and result generation combine to build a complex AI agent in just a few lines of code.
DeepAgents Overview
Loop‑based agents that repeatedly call a language model work for simple tasks, but on complex, multi‑step problems they tend to take poorly chosen steps, make tool‑call errors, and overflow the context window. DeepAgents (v0.4.3) packages the capabilities such agents commonly need (task planning, note‑taking, a file system, long‑term memory, and sub‑agent coordination) so developers can focus on business logic and build sophisticated agents with only a few lines of configuration.
Positioning of DeepAgents
LangGraph : the kernel that schedules workflows, persists state, and provides monitoring.
LangChain : a high‑level SDK built on LangGraph that offers ready‑made functions such as create_agent.
DeepAgents : an additional layer that adds intelligent modules—task planning, file system, long‑term memory, and sub‑agent management—exposed through the core function create_deep_agent.
When to Use Each Framework
Simple step‑by‑step tasks: use create_agent from LangChain 1.0.
Fine‑grained custom workflows: build directly with LangGraph 1.0.
Complex, long‑running agents requiring planning, file handling, and memory: choose DeepAgents.
DeepAgents Code Walkthrough
The official deep_research example builds a research agent that can search, think, and delegate work to a sub‑agent. The script is only a few dozen lines and centers on the call to create_deep_agent.
from datetime import datetime
from langchain.chat_models import init_chat_model
from deepagents import create_deep_agent
from research_agent.prompts import (
    RESEARCHER_INSTRUCTIONS,
    RESEARCH_WORKFLOW_INSTRUCTIONS,
    SUBAGENT_DELEGATION_INSTRUCTIONS,
)
from research_agent.tools import tavily_search, think_tool
max_concurrent_research_units = 3
max_researcher_iterations = 3
current_date = datetime.now().strftime("%Y-%m-%d")
INSTRUCTIONS = (
    RESEARCH_WORKFLOW_INSTRUCTIONS
    + "\n" + "=" * 80 + "\n"
    + SUBAGENT_DELEGATION_INSTRUCTIONS.format(
        max_concurrent_research_units=max_concurrent_research_units,
        max_researcher_iterations=max_researcher_iterations,
    )
)
research_sub_agent = {
    "name": "research-agent",
    "description": "Delegate research to the sub-agent researcher. Only give this researcher one topic at a time.",
    "system_prompt": RESEARCHER_INSTRUCTIONS.format(date=current_date),
    "tools": [tavily_search, think_tool],
}
model = init_chat_model(model="anthropic:claude-sonnet-4-5-20250929", temperature=0.0)
agent = create_deep_agent(
    model=model,
    tools=[tavily_search, think_tool],
    system_prompt=INSTRUCTIONS,
    subagents=[research_sub_agent],
)
Core Components
create_deep_agent : factory that assembles a deep agent with a base model, tools, system prompt, and sub‑agents. Internally it provides a task planner, file system, sub‑agent manager, and long‑term memory.
System Prompt Construction : concatenates RESEARCH_WORKFLOW_INSTRUCTIONS (overall workflow) and SUBAGENT_DELEGATION_INSTRUCTIONS (delegation rules with concurrency and iteration limits). The sub‑agent prompt RESEARCHER_INSTRUCTIONS embeds the current date for time‑aware queries.
Sub‑Agent Definition : a dictionary specifying name, description, system_prompt, and tools. The description is read by the main agent to decide when to delegate.
Tools : tavily_search (structured web search) and think_tool (deliberation after each search).
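The real tavily_search and think_tool live in research_agent.tools and are not shown in the walkthrough. As a rough sketch of what a tool like think_tool might look like: LangChain can wrap a plain Python function, using its name, signature, and docstring as the tool schema the model sees. The body below is a guess at the idea, not the official implementation.

```python
# Hypothetical sketch of a think_tool-style tool. LangChain can treat a plain
# function as a tool: the docstring tells the model when to call it, and the
# return value flows back into the conversation as the tool's result.
def think_tool(reflection: str) -> str:
    """Record a reflection after each search to guide the next research step."""
    # Echo the reflection back so the model gets a dedicated deliberation step.
    return f"Reflection recorded: {reflection}"

print(think_tool("The first search covered definitions; next, find case studies."))
```

Because the function only returns its input with a marker, the "thinking" happens in the model: calling the tool forces a step whose sole purpose is reflection between searches.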
Design Principles
Encapsulation of Common Capabilities : planning, sub‑agent management, and file handling are hidden inside create_deep_agent, so developers do not write LangGraph nodes.
Simplified Development : tasks that would require hundreds of lines of LangGraph code are expressed in a few dozen lines of configuration.
Modular Composition : main agents, sub‑agents, and tools are independent modules that can be recombined for different domains such as data analysis or code generation.
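To illustrate the recombination point, a second sub‑agent for another domain only needs the same dictionary shape as research_sub_agent from the walkthrough. The name, prompt text, and empty tool list below are placeholders, not part of the official example.

```python
# Hypothetical sub-agent for a data-analysis domain, reusing the same
# dictionary shape (name, description, system_prompt, tools) that the
# walkthrough's research_sub_agent uses.
analysis_sub_agent = {
    "name": "analysis-agent",
    "description": "Delegate data-analysis questions to this sub-agent, one dataset at a time.",
    "system_prompt": "You are a careful data analyst. Answer only from the data you are given.",
    "tools": [],  # domain-specific tools would go here
}

# The main agent could then be assembled with both specialists, e.g.:
# agent = create_deep_agent(model=model, tools=[...],
#                           subagents=[research_sub_agent, analysis_sub_agent])
print(sorted(analysis_sub_agent))
```

Because delegation is driven by each sub‑agent's description, adding a specialist is purely additive: no existing prompt or tool needs to change.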
Execution Flow Example
Given the user request "research context engineering approaches used to build AI agents", the agent proceeds as follows:
Task Planning : the built‑in write_todos tool creates a todo list (e.g., "design research request") and stores it in research_request.md.
Iterative Execution : after each step, write_todos updates the task status to completed and triggers the next step, such as invoking a sub‑agent to gather specific research content.
Result Aggregation : once all steps finish, write_file writes the final report to final_report.md.
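The planning loop above can be pictured with a plain‑Python stand‑in for the todo state that write_todos maintains. The real tool is built into DeepAgents and stores this in agent state; the code below is only an illustration of the pending‑to‑completed progression.

```python
# Illustration only: a stand-in for the todo state that DeepAgents' built-in
# write_todos tool maintains inside agent state.
todos = [
    {"task": "design research request", "status": "pending"},
    {"task": "delegate search to research-agent", "status": "pending"},
    {"task": "write final report", "status": "pending"},
]

def complete_next(todos):
    """Mark the first pending task completed, mimicking one agent iteration."""
    for todo in todos:
        if todo["status"] == "pending":
            todo["status"] = "completed"
            return todo["task"]
    return None  # nothing left: the agent moves to result aggregation

while (done := complete_next(todos)) is not None:
    print(f"completed: {done}")
```

Each iteration completes exactly one task and reveals the next, which mirrors how the agent alternates between updating the plan and acting on it.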
Running Example
Invocation:
result = agent.invoke({
    "messages": [{"role": "user", "content": "research context engineering approaches used to build AI agents"}]
})
format_messages(result["messages"])
1. Task Planning : write_todos generates a plan and saves it to research_request.md.
2. Step Execution & State Update : each completed step updates its status to completed via write_todos and the agent proceeds to the next step, often delegating to the sub‑agent.
3. Result Summarization : after all tasks are marked completed, write_file writes the aggregated research report to final_report.md.
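DeepAgents keeps generated files in a virtual file system inside agent state, so the report written by write_file can be read back from the invoke result. The exact return shape is an assumption here (a "files" mapping keyed by filename), illustrated with a plain dict standing in for agent.invoke(...)'s result.

```python
# Assumed result shape: a plain dict standing in for agent.invoke(...),
# with generated files under a "files" key (filename -> content).
result = {
    "messages": ["...conversation history..."],
    "files": {"final_report.md": "# Context Engineering for AI Agents\n..."},
}

report = result.get("files", {}).get("final_report.md")
if report:
    print(report.splitlines()[0])  # first line of the generated report
```

Reading the file from the returned state, rather than from disk, matters because the agent's file system is virtual: nothing is written to the host machine unless you persist it yourself.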
Conclusion
DeepAgents extends LangChain’s create_agent with middleware and built‑in tools that enable task planning, sub‑agent coordination, file system management, and long‑term memory. By calling create_deep_agent, developers can construct sophisticated, multi‑step agents with minimal code while retaining the flexibility of LangGraph underneath.
Fun with Large Models
Master's graduate from Beijing Institute of Technology, published four top‑journal papers, previously worked as a developer at ByteDance and Alibaba. Currently researching large models at a major state‑owned enterprise. Committed to sharing concise, practical AI large‑model development experience, believing that AI large models will become as essential as PCs in the future. Let's start experimenting now!
