Building Multi‑Agent Systems with LangGraph: A Step‑by‑Step Guide

This article walks through implementing a multi‑agent workflow with LangGraph: comparing it to OpenAI's lightweight Swarm framework, defining the model, tools, agents, and graph structure, testing the result, and weighing the framework's strengths, limitations, and suitable use cases.


Background and Comparison with Swarm

In a previous article we introduced five popular multi‑agent orchestration frameworks and demonstrated a simple system built on OpenAI's lightweight Swarm framework: one supervising agent and two task‑executing agents. Swarm can be learned in about ten minutes, but it supports only OpenAI models and lacks the flexibility, scalability, and workflow control needed for complex agent systems.

Swarm framework diagram

Why Use LangGraph?

LangGraph, released by the LangChain team, targets complex Retrieval‑Augmented Generation (RAG), agent, and multi‑agent applications. It provides fine‑grained control over LLM workflows, enabling loops, conditional branches, and state management, which are essential for reliable, predictable AI systems in production environments.

Key motivations:

Support for complex, iterative LLM workflows and multi‑agent collaboration.

Improved controllability and predictability compared to black‑box agents.

LangGraph workflow illustration

Core Features of LangGraph

Supports parallel execution, conditional branching, loops, and other fine‑grained workflow controls.

Flexible node definitions: simple functions, direct LLM calls, or full agent interactions.

Persistent global state allowing pause, resume, or human intervention.

Implementation Steps

1. Define the Base Model and State

# Base LLM and shared state
import operator
from typing import Annotated, Sequence, TypedDict

from langchain_core.messages import BaseMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

# State (LangGraph context structure)
class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]
    next: str
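The `Annotated[..., operator.add]` reducer is what lets each node return only its new messages while the shared state accumulates them; the plain `next` field is simply overwritten. A minimal plain‑Python sketch of that merge behavior (the `apply_update` helper is hypothetical, used only to illustrate the idea, no LangGraph required):

```python
import operator
from typing import Any

def apply_update(state: dict[str, Any], update: dict[str, Any]) -> dict[str, Any]:
    # Channels annotated with a reducer (operator.add for `messages`)
    # are merged; plain channels (like `next`) are overwritten.
    merged = dict(state)
    for key, value in update.items():
        if key == "messages":
            merged[key] = operator.add(state.get("messages", []), value)
        else:
            merged[key] = value
    return merged

state = {"messages": ["User: hello"], "next": ""}
state = apply_update(state, {"messages": ["Researcher: result"], "next": "Emailer"})
print(state["messages"])  # ['User: hello', 'Researcher: result']
print(state["next"])      # Emailer
```

This is why the agent nodes below can each return just `{"messages": [one_new_message]}` without clobbering the conversation history.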

2. Define Tools

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.tools import tool

# Two tools for the agents
search_tool = TavilySearchResults(max_results=5)

@tool
def mail_tool(subject: str, body: str, recipient: str | None = None) -> str:
    """Send an email."""
    recipient = recipient or "[email protected]"
    print(f"Sending email to {recipient}, subject: {subject}, body: {body}")
    return f"Sent email to {recipient} with subject '{subject}' and body '{body}'"

3. Define Agents and Their Nodes

from langchain_core.messages import HumanMessage
from langgraph.prebuilt import create_react_agent

# Supervisor agent decides the next step
def supervisor_agent(state):
    messages_content = "\n".join(f"{msg.name}: {msg.content}" for msg in state["messages"])
    prompt = (
        "Evaluate the user query and split tasks, then decide the next AI assistant. "
        "Options: Researcher, Emailer, FINISH.\n"
        "Messages:\n" + messages_content + "\n"
        "Next step? Choose from ['FINISH', 'Researcher', 'Emailer'] without extra characters."
    )
    response = llm.invoke(prompt)
    return {"next": response.content.strip()}

# Researcher agent uses the search tool
def research_agent(state):
    researcher = create_react_agent(llm, tools=[search_tool])
    result = researcher.invoke(state)
    return {"messages": [HumanMessage(content=result["messages"][-1].content, name="Researcher")]}

# Emailer agent uses the mail tool
def email_agent(state):
    emailer = create_react_agent(llm, tools=[mail_tool])
    result = emailer.invoke(state)
    return {"messages": [HumanMessage(content=result["messages"][-1].content, name="Emailer")]}
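One caveat: the routing value comes straight from the model, and models sometimes wrap the answer in quotes or add extra words, which would make the conditional‑edge lookup fail. A small hardening step worth considering (the `parse_next` helper is a hypothetical addition, not part of the original code) validates the output and falls back to FINISH:

```python
VALID_STEPS = {"FINISH", "Researcher", "Emailer"}

def parse_next(raw: str) -> str:
    # Strip whitespace and stray quotes/backticks the model may add,
    # then fall back to FINISH on anything unrecognized.
    choice = raw.strip().strip("'\"`").strip()
    return choice if choice in VALID_STEPS else "FINISH"

print(parse_next(" 'Researcher' "))  # Researcher
print(parse_next("Done!"))           # FINISH
```

With this in place, the supervisor's return becomes `{"next": parse_next(response.content)}`, and an off-format model reply ends the run cleanly instead of raising a KeyError.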

4. Build the Graph (Workflow)

# Define workflow
from langgraph.graph import END, START, StateGraph

workflow = StateGraph(AgentState)
workflow.add_node("Supervisor", supervisor_agent)
workflow.add_node("Researcher", research_agent)
workflow.add_node("Emailer", email_agent)

workflow.add_edge(START, "Supervisor")
workflow.add_edge("Researcher", "Supervisor")
workflow.add_edge("Emailer", "Supervisor")
workflow.add_conditional_edges(
    "Supervisor",
    lambda x: x["next"],
    {
        "Researcher": "Researcher",
        "Emailer": "Emailer",
        "FINISH": END,
    },
)

graph = workflow.compile()
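The control flow this graph encodes (the Supervisor loops until it answers FINISH) can be simulated with stub nodes in plain Python, which is a cheap way to sanity‑check the routing table before spending money on LLM calls. Everything below, including the `run_graph` driver and the stub node functions, is a hypothetical sketch, not LangGraph API:

```python
def run_graph(nodes, edges, conditional, state, entry="Supervisor"):
    # Follow fixed edges; at a conditional node, route on the returned state.
    current, visited = entry, []
    while current != "END":
        visited.append(current)
        state.update(nodes[current](state))
        current = conditional[current](state) if current in conditional else edges[current]
    return visited

# Stub nodes: the "supervisor" picks Researcher, then Emailer, then FINISH.
plan = iter(["Researcher", "Emailer", "FINISH"])
nodes = {
    "Supervisor": lambda s: {"next": next(plan)},
    "Researcher": lambda s: {"messages": s["messages"] + ["research done"]},
    "Emailer": lambda s: {"messages": s["messages"] + ["email sent"]},
}
edges = {"Researcher": "Supervisor", "Emailer": "Supervisor"}
conditional = {"Supervisor": lambda s: "END" if s["next"] == "FINISH" else s["next"]}

final_state = {"messages": [], "next": ""}
order = run_graph(nodes, edges, conditional, final_state)
print(order)  # ['Supervisor', 'Researcher', 'Supervisor', 'Emailer', 'Supervisor']
```

Note how every worker edge leads back to the Supervisor, mirroring the `add_edge` calls above: the Supervisor is the only node that can end the run.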

5. Test the Application

for s in graph.stream({
    "messages": [
        HumanMessage(content="Search for tomorrow's weather in Nanjing and send an email to [email protected]", name="User")
    ]
}):
    if "__end__" not in s:
        print(s)
        print("--------------------")

The streamed output shows the Supervisor selecting the Researcher, then the Emailer, and finally finishing the task.

LangGraph execution output

Pros and Cons

Advantages

Powerful enough for virtually any complex LLM scenario, including advanced RAG, looping agents, and orchestrated multi‑agent systems.

Highly flexible and extensible; easy to integrate with existing applications.

Seamless integration with LangSmith, LangGraph Studio, and other LangChain ecosystem tools for production‑grade deployments.

Independent framework with broad third‑party LLM, vector store, and API tool support.

Strong community backing inherited from LangChain.

Disadvantages

Steeper learning curve; less beginner‑friendly.

Heavyweight abstraction can make debugging and tracing more cumbersome.

When to Choose LangGraph

Building enterprise‑level applications that require high reliability.

Needing custom, complex LLM workflow orchestration.

Planning for future extensibility and modularity.

Having a solid foundation in LLM application development.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Python, AI, LLM, Multi-Agent, LangGraph
Written by

AI Large Model Application Practice

Focused on deep research and development of large-model applications. Authors of "RAG Application Development and Optimization Based on Large Models" and "MCP Principles Unveiled and Development Guide". Primarily B2B, with B2C as a supplement.
