LangGraph Agent Design Patterns Part 3: Supervisor and Hierarchical Architectures
This article explains LangGraph's multi‑agent design patterns, focusing on the Supervisor Architecture for centralized coordination and the Hierarchical Architecture for scalable team‑based management, and provides step‑by‑step code examples that demonstrate how to implement both patterns.
Multi‑Agent Design Patterns in LangGraph
Modularity, specialization, and controlled communication enable large‑model applications to handle complex business logic more efficiently than a single monolithic agent.
Supervisor Architecture
Concept
A central Supervisor Agent receives a task, decides which Specialist Agent should handle each sub‑task, routes communication through itself, and aggregates the results. This yields high predictability and manageability for structured, multi‑step tasks.
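Stripped of the framework, this control flow amounts to a routing loop. The sketch below is plain Python with hypothetical names (`route`, `supervise`, and the specialist stubs are illustrative, not LangGraph APIs); in LangGraph the routing decision is made by an LLM rather than a keyword rule:

```python
# Minimal supervisor-loop sketch. In LangGraph the routing decision is made
# by an LLM; here a simple keyword rule stands in for it.
def research_specialist(task: str) -> str:
    return "data: 42"            # placeholder result

def math_specialist(task: str) -> str:
    return "sum: 84"             # placeholder result

SPECIALISTS = {"research": research_specialist, "math": math_specialist}

def route(task: str) -> str:
    """Stand-in for the supervisor's LLM routing decision."""
    return "research" if "search" in task else "math"

def supervise(tasks: list[str]) -> list[str]:
    results = []
    for task in tasks:                      # the supervisor handles sub-tasks one by one
        specialist = SPECIALISTS[route(task)]
        results.append(specialist(task))    # all traffic flows through the supervisor
    return results                          # aggregation step

print(supervise(["search employee counts", "add the numbers"]))
# → ['data: 42', 'sum: 84']
```

The key property the sketch preserves is that specialists never talk to each other directly; every hand-off passes through the supervisor, which is what makes the pattern predictable.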
Comparison with Coordinator‑Worker Workflow
Abstraction level: the Supervisor Architecture defines a high-level system organization of multiple agents; Coordinator-Worker describes low-level task decomposition.
Core focus: the Supervisor Architecture manages agents and system coordination; Coordinator-Worker focuses on task splitting and execution efficiency.
Component nature: the Supervisor Architecture uses autonomous agents with decision-making capabilities; Coordinator-Worker uses predefined model calls or functions.
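The distinction is easiest to see in code. Reduced to a sketch, a coordinator-worker workflow is fixed decomposition over plain predefined functions, with no autonomous routing anywhere (all names here are hypothetical; the numbers reuse the walkthrough's simulated data):

```python
# Coordinator-worker sketch: the decomposition is hard-coded and the workers
# are predefined functions; nothing here makes an autonomous routing decision.
def fetch_count(company: str) -> int:
    counts = {"google": 182545, "meta": 67043}  # canned data for the sketch
    return counts[company]

def add(a: int, b: int) -> int:
    return a + b

def coordinator() -> int:
    # The coordinator always fetches twice, then adds once; no LLM decides this.
    return add(fetch_count("google"), fetch_count("meta"))

print(coordinator())  # → 249588
```

In the Supervisor Architecture, by contrast, which component runs next is decided at run time by an agent, not fixed at write time.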
Code Walk‑through
Install the helper library with pip install langgraph-supervisor, set the required environment variables (e.g., DEEPSEEK_API_KEY), and import the modules:
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.tools import tool
from langchain_deepseek import ChatDeepSeek
from langchain.agents import create_agent
from langgraph_supervisor import create_supervisor
from dotenv import load_dotenv
load_dotenv()
llm = ChatDeepSeek(model="deepseek-chat")
Define two specialist agents with dedicated tools:
@tool
def add(a: float, b: float) -> float:
    """Add two numbers"""
    return a + b

@tool
def multiply(a: float, b: float) -> float:
    """Multiply two numbers"""
    return a * b

@tool
def web_search(query: str) -> str:
    """Simulated web search returning 2025 employee counts"""
    if "google" in query.lower():
        return "Google's employee count in 2025 is 182,545"
    elif "facebook" in query.lower() or "meta" in query.lower():
        return "Facebook (Meta)'s employee count in 2025 is 67,043"
    else:
        return "No relevant information found"
math_agent = create_agent(
    model=llm,
    tools=[add, multiply],
    system_prompt="You are a math agent responsible for numerical calculation tasks.",
    name='math_agent'
)

research_agent = create_agent(
    model=llm,
    tools=[web_search],
    system_prompt="You are a research agent responsible for information-search tasks.",
    name='research_agent'
)
Create the supervisor workflow with a prompt that directs the supervisor to choose the appropriate specialist:
supervisor_prompt = """You are the supervisor agent, responsible for coordinating and managing two specialist agents:
- math_agent: handles numerical calculation, including addition and multiplication
- research_agent: handles information search, especially web search
Based on the user's question, decide which agent to call:
- If information needs to be searched, call research_agent
- If a mathematical calculation is needed, call math_agent
- Return FINISH when the task is complete
Make sure to call the agents in a sensible order, e.g., fetch the data first, then do the calculation."""
workflow = create_supervisor(
    [math_agent, research_agent],
    model=llm,
    prompt=supervisor_prompt,
)
app = workflow.compile()
result = app.invoke({"messages": [HumanMessage(content="What is the combined 2025 employee count of Google and Facebook?")]})
print(result['messages'][-1].content)
The execution shows the supervisor first invoking the research agent to fetch the employee counts, then the math agent to sum them, and finally returning the correct total.
Hierarchical Architecture
Concept
When a single supervisor must manage many specialists, decision paths become unwieldy. The hierarchical architecture introduces intermediate Team Supervisors that manage groups of specialists, and a top‑level supervisor that coordinates the team supervisors. This flattens the organization, improves scalability, and distributes decision‑making load.
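The same routing idea simply nests one level deeper. The plain-Python sketch below (all names hypothetical, no LangGraph APIs; keyword rules stand in for LLM decisions) shows a top-level router delegating to team routers, which in turn pick specialists:

```python
# Two-level delegation sketch: a top supervisor picks a team, and each
# team supervisor picks (or sequences) its own specialists.
def researcher(task: str) -> str: return f"research({task})"
def mathematician(task: str) -> str: return f"math({task})"
def writer(task: str) -> str: return f"write({task})"
def publisher(task: str) -> str: return f"publish({task})"

def research_team(task: str) -> str:
    # Team-level decision: which specialist handles this sub-task?
    worker = researcher if "find" in task else mathematician
    return worker(task)

def writing_team(task: str) -> str:
    draft = writer(task)          # fixed order inside this team:
    return publisher(draft)       # write first, then publish

def top_supervisor(task: str) -> str:
    # Top-level decision: which team handles this request?
    team = research_team if "find" in task or "compute" in task else writing_team
    return team(task)

print(top_supervisor("find employee counts"))   # → research(find employee counts)
print(top_supervisor("draft the report"))       # → publish(write(draft the report))
```

Note that the top supervisor only ever reasons about two teams, not four specialists; that narrowing of each decision point is what makes the hierarchy scale.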
Code Walk‑through
Reuse the same environment and specialist definitions (the writing team additionally requires a writing_agent and a publishing_agent, built with create_agent in the same way as the earlier specialists), then create two team-level supervisors:
# Research team (math + research agents)
research_team_prompt = """You are the supervisor of the research team, coordinating the following agents:
- math_agent
- research_agent
Decide which agent to call based on the task, and return FINISH when done."""
research_team_supervisor = create_supervisor(
    [math_agent, research_agent],
    model=llm,
    prompt=research_team_prompt,
)
research_team = research_team_supervisor.compile(name='research_team')
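The writing team below references writing_agent and publishing_agent, which this walkthrough never shows being built. Assuming they follow the same pattern as math_agent and research_agent, their tools might look like the placeholder functions here (write_report and publish_report are hypothetical; in practice you would decorate them with @tool and pass them to create_agent exactly as before):

```python
def write_report(topic: str, data: str) -> str:
    """Draft a short report from gathered data (placeholder logic)."""
    return f"Report on {topic}:\n{data}"

def publish_report(report: str) -> str:
    """Simulate publishing and return a confirmation (placeholder logic)."""
    return f"Published ({len(report)} characters)."

# Wiring mirrors the earlier specialists, e.g.:
# writing_agent = create_agent(model=llm, tools=[write_report],
#                              system_prompt="You are a writing agent...", name='writing_agent')
# publishing_agent = create_agent(model=llm, tools=[publish_report],
#                                 system_prompt="You are a publishing agent...", name='publishing_agent')
```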
# Writing team (writing + publishing agents)
writing_team_prompt = """You are the supervisor of the writing team, coordinating the following agents:
- writing_agent
- publishing_agent
Call writing_agent first, then publishing_agent, and return FINISH when done."""
writing_team_supervisor = create_supervisor(
    [writing_agent, publishing_agent],
    model=llm,
    prompt=writing_team_prompt,
)
writing_team = writing_team_supervisor.compile(name='writing_team')
Define the top-level supervisor that selects between the two teams:
top_supervisor_prompt = """You are the top-level supervisor agent, coordinating two specialist teams:
- research_team: responsible for data gathering and calculation
- writing_team: responsible for report writing and publishing
Decide which team to call based on the user's request, and return FINISH when done."""
workflow = create_supervisor(
    [research_team, writing_team],
    model=llm,
    prompt=top_supervisor_prompt,
)
app = workflow.compile()
result = app.invoke({"messages": [HumanMessage(content="Please research the combined 2025 employee count of Google and Facebook, then generate and publish a report.")]})
print(result['messages'][-1].content)
The run demonstrates full hierarchical coordination: the top supervisor delegates to the research team (which calls the research and math agents), then to the writing team (which generates and publishes the report).
Core Advantages of Multi‑Agent Systems
Modularity: decompose a monolithic agent into independent sub-agents that can be developed, tested, and maintained separately.
Specialization: each agent can be optimized for a specific domain (e.g., research, mathematics, planning), avoiding performance dilution.
Controlled communication: explicit interaction protocols and information-exchange formats ensure predictability, stability, and traceability for multi-step workflows.
Summary
The supervisor architecture provides a centralized, predictable coordination model suitable for structured tasks. To address scalability limits, the hierarchical architecture adds intermediate team supervisors and a top‑level coordinator, enabling efficient handling of larger, more complex workflows.
Fun with Large Models
Master's graduate of Beijing Institute of Technology with four papers in top journals; formerly a developer at ByteDance and Alibaba, now researching large models at a major state-owned enterprise. Committed to sharing concise, practical experience in large-model development, in the belief that large AI models will become as essential as PCs. Let's start experimenting now!
