Choosing Between LangGraph, create_agent, and Deep Agents: A Three‑Layer Abstraction Guide
The article compares LangGraph, create_agent, and Deep Agents—three abstraction layers in the LangChain ecosystem—explaining their hierarchy, trade‑offs, code examples, suitable scenarios, and common pitfalls to help developers pick the right tool for building AI assistants.
Three‑Layer Hierarchy
LangGraph sits at the lowest level, giving developers full control over the graph engine; create_agent provides a ready‑made ReAct loop that runs as soon as a model and tools are supplied; Deep Agents builds on create_agent and adds memory, toolsets, sub‑agent orchestration, and a virtual file system.
All three solve the same problem—building an AI assistant—but higher layers accelerate development while lower layers offer more fine‑grained adjustability. The recommended principle is to start with the highest abstraction and only drop down to LangGraph when custom control flow is required.
LangChain create_agent
create_agent runs a fixed ReAct loop (think → call tool → observe → think again) and requires only a model identifier and a list of tools.
```python
from langchain.agents import create_agent
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"It's 28°C and sunny in {city}!"

agent = create_agent(
    "anthropic:claude-sonnet-4-5",
    tools=[get_weather]
)

result = agent.invoke({"messages": [{"role": "user", "content": "What's the weather in Mumbai?"}]})
print(result["messages"][-1].content)
```

The agent automatically decides when to call get_weather, passes the argument, receives the result, and composes a natural-language reply; no loop code is needed.
A more realistic example shows a customer‑support bot that looks up an order and returns the return policy, demonstrating that multiple tools can be invoked in a single turn.
```python
from langchain.agents import create_agent
from langchain_core.tools import tool

@tool
def lookup_order(order_id: str) -> str:
    """Look up the status of an order."""
    orders = {"1234": "Shipped - arrives Friday", "5678": "Processing - ships tomorrow"}
    return orders.get(order_id, "Order not found")

@tool
def check_return_policy() -> str:
    """Get the return policy."""
    return "You can return any item within 30 days for a full refund."

agent = create_agent(
    "openai:gpt-4o",
    tools=[lookup_order, check_return_policy],
    system_prompt="You are a friendly customer support agent. Always be polite."
)

result = agent.invoke({"messages": [{"role": "user", "content": "Where is my order #1234? And can I return it?"}]})
# The agent calls both tools and merges the answers.
```

If a structured JSON response is needed, the structured_output argument can enforce a Pydantic schema, eliminating manual parsing.
```python
from pydantic import BaseModel

class SupportReply(BaseModel):
    answer: str
    needs_human: bool
    category: str

agent = create_agent(
    "anthropic:claude-sonnet-4-5",
    tools=[lookup_order, check_return_policy],
    structured_output=SupportReply
)

result = agent.invoke({"messages": [...]})
reply = result["structured_output"]
print(reply.needs_human, reply.category)
```

Use create_agent for single-turn chatbots, API-driven support agents, or any scenario where a simple tool-calling loop suffices. It cannot express conditional branches, parallel steps, or persistent state.
Deep Agents
Deep Agents extends create_agent with built‑in memory, a virtual file system, sandboxed code execution, sub‑agent spawning, and long‑term user preferences.
```python
from deepagents import create_deep_agent

def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Top results for '{query}': ..."

agent = create_deep_agent(
    model="anthropic:claude-sonnet-4-5",
    tools=[search_web],
    system_prompt="You are a helpful research assistant."
)

result = agent.invoke({"messages": [{"role": "user", "content": "Research LangGraph and write a summary."}]})
```

In a more complex task, the agent writes a Python script, saves it, runs it in a sandbox, spawns a reviewer sub-agent, fixes issues, and returns the final version, all without the developer writing any orchestration code.
```python
from deepagents import create_deep_agent
from deepagents.tools import FileSystemTool, SandboxTool

agent = create_deep_agent(
    model="openai:gpt-4o",
    tools=[FileSystemTool(), SandboxTool()],
    system_prompt="""You are a senior software engineer.
1. Write the code to a file
2. Run it in the sandbox to test it
3. Spawn a reviewer sub-agent to critique it
4. Fix any issues found
5. Return the final version""",
    enable_subagents=True,
    memory_enabled=True,
)

result = agent.invoke({"messages": [{"role": "user", "content": "Write a Python script that scrapes news headlines and saves them to a CSV."}]})
```

Deep Agents retains cross-session memory, so a user who prefers Python will automatically receive Python code in later interactions.
LangGraph
LangGraph is the raw graph engine underlying both create_agent and Deep Agents. It exposes nodes (functions) and edges (control flow) directly, allowing conditional edges, loops, parallelism, human‑in‑the‑loop pauses, and persistent checkpoints.
```python
from langgraph.graph import StateGraph, MessagesState, START, END

def say_hello(state: MessagesState):
    return {"messages": [{"role": "ai", "content": "Hello, world!"}]}

graph = StateGraph(MessagesState)
graph.add_node("say_hello", say_hello)
graph.add_edge(START, "say_hello")
graph.add_edge("say_hello", END)

app = graph.compile()
result = app.invoke({"messages": []})
```

A realistic document-review workflow demonstrates branching, LLM-driven classification, and a human-approval node that pauses execution with interrupt() and resumes later via a checkpoint saver.
```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import interrupt

def human_approval(state: MessagesState):
    """Pause for human input and resume with the decision."""
    human_decision = interrupt({
        "message": "Should I approve this document?",
        "document": state["messages"][-1].content,
    })
    return {"messages": [{"role": "ai", "content": f"Human decided: {human_decision}"}]}

# `graph` is the document-review StateGraph described above (node wiring elided).
app = graph.compile(checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "review-session-1"}}
# Execution will pause at human_approval until a human responds.
```

LangGraph excels when you need custom branches, persistent state across restarts, debugging via time-travel (replaying checkpoints), or any workflow that cannot be expressed with the higher-level abstractions.
How to Choose
Ask whether you need custom control flow. If the answer is "no", start with create_agent. If you need branching, parallelism, human‑in‑the‑loop, or persistence, use LangGraph. For long‑chain, multi‑step tasks that require built‑in memory, file access, or sub‑agents, Deep Agents is the most convenient choice.
Same Task, Three Implementations
The article implements a simple "search-the-web" task with each framework, showing that the code size grows from ~12 lines (create_agent) to ~35 lines (LangGraph) while the underlying behavior remains identical.
Common Pitfalls for Beginners
Choosing LangGraph prematurely and writing excessive boilerplate when a simple create_agent would suffice.
Using Deep Agents for a trivial chatbot, unnecessarily adding file‑system and memory overhead.
Treating the three tools as mutually exclusive alternatives rather than layers of the same stack.
Forgetting to attach a MemorySaver (or database checkpoint) when using interrupt(), causing state loss.
Omitting docstrings on tool functions; without them the LLM may never invoke the tool.
Summary
create_agent is a pre-packaged ReAct loop: plug in a model and tools and you get a functional chat or tool-calling agent.
Deep Agents adds a full suite of peripherals (virtual file system, sandbox, sub‑agents, long‑term memory) on top of create_agent, making it ideal for complex, multi‑step assistants.
LangGraph is the bare‑bones engine that offers maximal control: custom branches, human‑in‑the‑loop pauses, persistent checkpoints, and parallel flows. Use it when the higher‑level abstractions cannot express your workflow.
