Designing Scalable Multi‑Agent Systems with LangGraph: Architectures, Communication, and Code Samples

This article explains why large‑language‑model agents become hard to manage, outlines the benefits of modular multi‑agent designs, compares several connection architectures, and provides concrete LangGraph code for supervisor‑based, tool‑calling, and custom workflow patterns.


Agent systems that rely on large language models (LLMs) quickly become complex, leading to poor tool‑selection decisions, overwhelming context, and the need for multiple domain experts such as planners, researchers, or mathematicians.

Why Split Into Multiple Agents?

An agent has too many tools and makes poor decisions about which one to call.

Context grows beyond what a single agent can track.

The application needs multiple specialists, such as a planner, researcher, or math expert.

Dividing an application into smaller, independent agents creates a multi‑agent system. Each agent can be as simple as a prompt plus an LLM call or as sophisticated as a ReAct agent.

Benefits of Multi‑Agent Systems

Modularity: Easier development, testing, and maintenance.

Specialization: Expert agents improve overall performance.

Control: Explicit routing between agents avoids hidden function calls.

Architectural Patterns

There are several common ways to connect agents:

Network: Every agent can talk to every other agent (full mesh).

Supervisor: A supervising LLM decides which agent to invoke next.

Supervisor (Tool Call): Agents are exposed as tools, and the supervisor's tool‑calling LLM decides which one to run.

Hierarchy: Supervisors of supervisors extend the supervisor pattern to larger systems.

Custom Workflow: Each agent communicates with only a subset of the others, yielding deterministic or partially deterministic flows.

Network

Agents are graph nodes with full‑mesh communication. The pattern is flexible, but scalability suffers as the number of agents grows:

Hard to enforce next‑agent selection.

Difficult to control information flow.

Recommendation: avoid this pattern for large systems.

Supervisor

A supervisor node (an LLM) decides which agent should act next and routes control accordingly; the pattern also supports parallel execution and map‑reduce style processing.

from typing import Literal
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START

model = ChatOpenAI()

class AgentState(MessagesState):
    next: Literal["agent_1", "agent_2"]

def supervisor(state: AgentState):
    # The supervisor LLM (e.g. via structured output) returns which agent acts next
    response = model.invoke(...)
    return {"next": response["next_agent"]}

def agent_1(state: AgentState):
    response = model.invoke(...)
    return {"messages": [response]}

def agent_2(state: AgentState):
    response = model.invoke(...)
    return {"messages": [response]}

builder = StateGraph(AgentState)
builder.add_node(supervisor)
builder.add_node(agent_1)
builder.add_node(agent_2)

builder.add_edge(START, "supervisor")
# Route based on supervisor decision
builder.add_conditional_edges("supervisor", lambda state: state["next"])
builder.add_edge("agent_1", "supervisor")
builder.add_edge("agent_2", "supervisor")

graph = builder.compile()

See the tutorial for a full supervisor‑based example.

Supervisor (Tool Call)

Agents are exposed as tools; the supervisor uses a tool‑calling LLM to decide which tool (agent) to run.

from typing import Annotated
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import InjectedState, create_react_agent

model = ChatOpenAI()

def agent_1(state: Annotated[dict, InjectedState]):
    """Hand the current task to agent 1."""
    # Call a sub-agent or LLM here and wrap its reply as a tool message
    tool_message = ...
    return {"messages": [tool_message]}

def agent_2(state: Annotated[dict, InjectedState]):
    """Hand the current task to agent 2."""
    tool_message = ...
    return {"messages": [tool_message]}

tools = [agent_1, agent_2]
supervisor = create_react_agent(model, tools)

Custom Multi‑Agent Workflow

Define explicit or dynamic control flow using ordinary or conditional edges in LangGraph.

Explicit flow (ordinary edges): predetermined sequence of agent calls.

Dynamic flow (conditional edges): LLM decides next step, similar to the supervisor‑tool pattern.

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START

model = ChatOpenAI()

def agent_1(state: MessagesState):
    response = model.invoke(...)
    return {"messages": [response]}

def agent_2(state: MessagesState):
    response = model.invoke(...)
    return {"messages": [response]}

builder = StateGraph(MessagesState)
builder.add_node(agent_1)
builder.add_node(agent_2)
# Define an explicit sequence: agent_1 always runs before agent_2
builder.add_edge(START, "agent_1")
builder.add_edge("agent_1", "agent_2")

graph = builder.compile()

Agent Communication Considerations

Key questions when building multi‑agent systems:

Is communication via graph state or tool calls?

How to handle agents with different state schemas?

Should agents share a full message history or only final results?

Graph State vs. Tool Calls

Most of the architectures above communicate through shared graph state; the tool‑calling variants instead pass data as tool‑call arguments.

Different State Schemas

Agents can have private state either by using sub‑graphs with separate schemas or by defining node functions with their own input schema.

Shared Message List

Sharing the entire message history (a shared "draft pad") improves cross‑agent reasoning but can quickly exhaust the context window; sharing only each agent's final result reduces overhead in large systems.
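A library‑free sketch of the final‑results option: each specialist keeps its scratchpad turns private and contributes only one message to the shared history. The helper name and the three‑task loop are hypothetical, just to show the size difference.

```python
def run_specialist(task: str) -> dict:
    # Internal scratchpad turns (reasoning, tool calls) stay private to the agent
    scratchpad = [f"plan for {task}", f"tool result for {task}"]
    final_answer = f"final answer for {task}"
    return {
        "private_turns": scratchpad + [final_answer],  # 3 turns per agent
        "shared": [final_answer],                      # only 1 is published
    }


shared_history: list[str] = []
total_private_turns = 0
for task in ["research", "math", "summary"]:
    out = run_specialist(task)
    total_private_turns += len(out["private_turns"])
    shared_history.extend(out["shared"])

# The shared history holds 3 messages, while the agents produced 9 turns in total.
```

Sharing full histories would put all nine turns in every downstream prompt; sharing final results keeps it to three.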

For tool‑calling agents, the supervisor determines input based on tool specifications, and LangGraph can pass parent state to child tools at runtime.

Conclusion

Multi‑agent architectures enable modularity, specialization, and explicit control for LLM‑driven applications. Selecting the right connection pattern—network, supervisor, supervisor‑tool, hierarchy, or custom workflow—depends on scalability needs and communication requirements. LangGraph provides flexible primitives to implement all of these patterns in Python.

Tags: Architecture, Python, LLM, Multi-agent, Supervisor, LangGraph
Written by

JavaEdge

First‑line development experience at multiple leading tech firms; now a software architect at a Shanghai state‑owned enterprise and founder of Programming Yanxuan. Nearly 300k followers online; expertise in distributed system design, AIGC application development, and quantitative finance investing.
