Designing Decentralized Multi‑Agent Networks with LangGraph: The Swarm Architecture

This article explains LangGraph's network (decentralized) architecture for multi‑agent systems, compares it with supervisor and hierarchical designs, and provides a step‑by‑step Python example using the langgraph‑swarm library to build agents that can dynamically hand off control and preserve conversation continuity.

Network Architecture Concept

In a network architecture all agents are peers that can establish many‑to‑many connections without a central coordinator. Each agent decides locally which other agent to invoke based on its own state, the overall system goal, and exchanged messages, enabling dynamic, adaptive collaboration.

Typical scenarios that benefit from this pattern are:

Collaborative and dynamic tasks – the optimal interaction order cannot be predetermined, so agents decide actions on the fly.

Decentralized decision‑making – control is distributed, improving fault tolerance because a single agent failure does not cripple the system.

Specialized agents requiring flexible interaction – agents with distinct expertise can exchange information smoothly when needed.

The trade‑off is increased design complexity: developers must manage overall goals, behavior boundaries, and ensure coherent system‑wide behavior despite distributed decisions.
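To make the peer-to-peer routing idea concrete before introducing any framework, here is a minimal framework-free sketch (plain Python, no LangGraph) in which each agent decides locally which peer, if any, should act next. The agent names and routing rules are invented for illustration only.

```python
# Framework-free sketch of decentralized (network) routing.
# Each agent returns (reply, next_agent); next_agent=None ends the run.

def math_agent(message: str):
    if "translate" in message:
        # This agent decides locally to hand off to a peer.
        return None, "translator_agent"
    return f"math result for: {message}", None

def translator_agent(message: str):
    return f"translation of: {message}", None

AGENTS = {"math_agent": math_agent, "translator_agent": translator_agent}

def run(message: str, start: str = "math_agent"):
    # No central coordinator: control moves wherever the current
    # agent points it, until some agent produces a final reply.
    current = start
    reply = None
    while current is not None:
        reply, current = AGENTS[current](message)
    return reply
```

The key property is that the routing table lives inside the agents themselves, not in a supervisor: adding a new peer only requires registering it and teaching existing agents when to hand off to it.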

Typical Network Architecture – Swarm

Swarm is a concrete realization of the network pattern. Each agent is an expert in a domain; the system routes dialogue dynamically to the most suitable agent. Swarm records the last activated agent, preserving context across turns.

Core characteristics of Swarm:

Agents with handoff capability – agents can explicitly transfer control and context to another agent better suited for the next sub‑task.

Dynamic routing – routing decisions depend on the current conversation state and each agent’s expertise.

Memory‑based continuity – the framework records which agent handled the previous turn, enabling coherent multi‑turn interactions.
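The third characteristic can be sketched without the library: a per-thread checkpoint remembers which agent was last active, so a new turn on the same thread resumes with that agent rather than the default. The function and agent names below are invented for illustration, not the langgraph-swarm API.

```python
# Sketch of memory-based continuity: a checkpoint stores which agent
# handled the last turn, so the next turn resumes with that agent.

checkpoints = {}  # thread_id -> {"active_agent": name}

def invoke(thread_id: str, message: str, default_agent: str = "agent1"):
    # Load (or initialize) the remembered state for this thread.
    state = checkpoints.setdefault(thread_id, {"active_agent": default_agent})
    agent = state["active_agent"]
    # A handoff request updates the remembered active agent.
    if "agent2" in message:
        agent = "agent2"
    state["active_agent"] = agent
    return f"{agent} handled: {message}"
```

A second turn on the same thread starts with whichever agent finished the first turn; a fresh thread starts back at the default agent.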

LangGraph Network Architecture Code Walkthrough

Environment Setup

pip install -U langgraph-swarm langchain-deepseek python-dotenv

Import Dependencies

from langchain_core.tools import tool
from langchain_deepseek import ChatDeepSeek
from langchain.agents import create_agent
from langgraph.checkpoint.memory import InMemorySaver
from langgraph_swarm import create_handoff_tool, create_swarm
from dotenv import load_dotenv

load_dotenv()

llm = ChatDeepSeek(model="deepseek-chat")

Define Specialist Agents

@tool
def add(a: int, b: int) -> int:
    """Calculate the sum of two integers."""
    print('Agent1 add tool called')
    return a + b

# Agent1: math expert, can hand off to Agent2
agent1 = create_agent(
    model=llm,
    tools=[
        add,
        create_handoff_tool(
            agent_name='agent2',
            description='When the user wants to talk to agent2, hand over to agent2',
        ),
    ],
    system_prompt='You are agent1, a math expert that can use the add function for all calculations.',
    name='agent1',
)

# Agent2: cat-voice agent, can hand off back to Agent1 for math
agent2 = create_agent(
    model=llm,
    tools=[
        create_handoff_tool(
            agent_name='agent1',
            description='Please hand over any math task to agent1',
        ),
    ],
    system_prompt='You are agent2, you speak in a cute cat voice.',
    name='agent2',
)

Create the Swarm Network

checkpointer = InMemorySaver()
workflow = create_swarm(
    [agent1, agent2],
    default_active_agent="agent1"  # agent1 starts active
)
app = workflow.compile(checkpointer=checkpointer)

Run a Test Conversation

config = {'configurable': {'thread_id': '1'}}

# 1st turn: request to talk to agent2
first = app.invoke(
    {'messages': [{'role': 'user', 'content': 'I want to talk to agent2, please hand over'}]},
    config
)
print(first['messages'][-1].content)

# 2nd turn: a math question
second = app.invoke(
    {'messages': [{'role': 'user', 'content': 'What is 100+100?'}]},
    config
)
print(second['messages'][-1].content)

The first invocation activates agent2 after the handoff request; the second invocation routes the math query back to agent1, demonstrating dynamic handoff and memory‑based continuity.
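What makes this work is that a handoff tool does not return a textual answer: it returns a routing command telling the parent graph to transfer control (along with the shared message history) to the named agent. A simplified, framework-free emulation of that contract, with all names invented for illustration:

```python
from dataclasses import dataclass

# Simplified emulation of a handoff tool's contract: instead of text,
# the tool returns a routing object naming the agent that should take over.

@dataclass
class Handoff:
    goto: str  # name of the agent that should become active

def make_handoff_tool(agent_name: str):
    """Build a tool that, when called, requests a transfer to agent_name."""
    def tool():
        return Handoff(goto=agent_name)
    return tool

def dispatch(result, current_agent: str) -> str:
    # The runtime inspects each tool result: a Handoff switches the
    # active agent; any ordinary result keeps the current one.
    return result.goto if isinstance(result, Handoff) else current_agent
```

In the real library the routing object also carries the conversation state upward to the swarm graph, which is why agent2 can answer with full context after agent1 hands off.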

Comparison of the Three Multi‑Agent Designs

Supervisor architecture – Central supervisor plus specialist agents; centralized control, medium complexity, medium scalability, low flexibility; suited for well‑defined coordinated specialist tasks.

Hierarchical architecture – Top‑level supervisor plus team‑lead agents; layered control reduces load per agent, highly modular; high scalability, medium flexibility; suited for complex tasks with many agents requiring organized structure.

Network architecture (Swarm) – Decentralized agents with many‑to‑many communication; distributed control leads to emergent behavior, harder to manage; high scalability, high flexibility; suited for collaborative, dynamic environments with unpredictable workflows.
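For contrast with the decentralized sketch earlier, the supervisor design funnels every routing decision through one central node: workers never choose their own successor. A framework-free sketch (agent names and the routing heuristic are invented for illustration):

```python
# Centralized (supervisor) routing sketch: one router picks the worker;
# workers only answer, they never route.

def supervisor(message: str) -> str:
    # Toy routing rule standing in for an LLM-based router.
    return "math_agent" if any(c.isdigit() for c in message) else "chat_agent"

WORKERS = {
    "math_agent": lambda m: f"math_agent answers: {m}",
    "chat_agent": lambda m: f"chat_agent answers: {m}",
}

def run(message: str) -> str:
    return WORKERS[supervisor(message)](message)
```

Comparing the two sketches makes the trade-off tangible: the supervisor is easier to reason about but is a single point of control, while the network distributes that control at the cost of harder-to-predict system-wide behavior.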

Conclusion

LangGraph, together with the langgraph-swarm library, provides concrete support for all three architectures. The network (Swarm) pattern offers a decentralized, highly flexible solution for tasks that require dynamic collaboration and emergent behavior. Developers can select the architecture that best matches their problem domain and scalability requirements.

Tags: Network Architecture, Python, Swarm, Multi-agent, LangGraph
Written by

Fun with Large Models

Master's graduate from Beijing Institute of Technology, published four top‑journal papers, previously worked as a developer at ByteDance and Alibaba. Currently researching large models at a major state‑owned enterprise. Committed to sharing concise, practical AI large‑model development experience, believing that AI large models will become as essential as PCs in the future. Let's start experimenting now!
