How to Use LangGraph Conditional Edge for Dynamic Branching Decisions
This article explains the concept of Conditional Edge in LangGraph, shows how to add conditional edges with three parameters, demonstrates rule‑based, multi‑branch, and loop routing patterns, compares rule‑based versus LLM‑based routing, provides a complete customer‑service agent example, and lists common pitfalls and best‑practice checklists.
LangGraph’s Conditional Edge enables runtime decision‑making between nodes, allowing a graph to ask "what's the next step?" after a node finishes and then jump to one of several possible successors based on the answer.
01 What is Conditional Edge and Why It Matters
A fixed edge always jumps from node A to node B unconditionally. A Conditional Edge first invokes a router function after node A finishes, then routes to B, C, or END based on the router’s return value. The essential difference is that a fixed edge follows a static path, while a conditional edge determines the path at runtime and can have one to N branches.
Fixed edge:
NodeA ──────────────> NodeB

Conditional edge:
NodeA ──> router() ─┬─> NodeB
                    ├─> NodeC
                    └─> END

02 addConditionalEdges: Three Parameters Explained
The only method to add a conditional edge is addConditionalEdges (JS) or add_conditional_edges (Python). Its signature is:
graph.addConditionalEdges(
  sourceNode,   // the node where routing starts
  routerFn,     // function(state) → next node name
  edgeMapping   // optional map from router return values to node names
)

If the router returns a node name directly, edgeMapping can be omitted. When you want the router to return semantic labels (e.g., "tool_call") instead of node names, you must provide the mapping.
03 Simple Rule‑Based Router
In the most straightforward scenario the router contains hard‑coded rules and does not involve an LLM. Example: if the model’s last message requests a tool call, jump to the tool node; otherwise end.
import { StateGraph, END } from "@langchain/langgraph";
import { Annotation } from "@langchain/langgraph";
import { BaseMessage, AIMessage } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

// Define state
const AgentState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (x, y) => x.concat(y),
    default: () => [],
  }),
});

const tools = [new TavilySearchResults({ maxResults: 3 })];
const model = new ChatOpenAI({ model: "gpt-4o" }).bindTools(tools);

async function callAgent(state) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

async function callTools(state) {
  const lastMessage = state.messages[state.messages.length - 1] as AIMessage;
  const toolCalls = lastMessage.tool_calls ?? [];
  const results = await Promise.all(
    toolCalls.map(async (tc) => {
      const tool = tools.find((t) => t.name === tc.name);
      if (!tool) throw new Error(`Tool ${tc.name} not found`);
      const result = await tool.invoke(tc.args);
      return { role: "tool" as const, content: String(result), tool_call_id: tc.id };
    })
  );
  return { messages: results };
}

function shouldContinue(state) {
  const lastMessage = state.messages[state.messages.length - 1] as AIMessage;
  if (lastMessage.tool_calls && lastMessage.tool_calls.length > 0) {
    return "tools";
  }
  return "__end__";
}

const workflow = new StateGraph(AgentState)
  .addNode("agent", callAgent)
  .addNode("tools", callTools)
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", shouldContinue) // router decides "tools" or "__end__"
  .addEdge("tools", "agent"); // loop back for another iteration

const app = workflow.compile();

The key detail is that .addConditionalEdges("agent", shouldContinue) does not need an explicit edgeMapping because shouldContinue returns the exact node name.
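For reference, a minimal run of the compiled graph could look like the sketch below (the HumanMessage import and the sample question are illustrative additions, not part of the original example):

import { HumanMessage } from "@langchain/core/messages";

const result = await app.invoke({
  messages: [new HumanMessage("What is the weather in San Francisco today?")],
});
// After any tool round-trips, the last message holds the agent's final answer.
console.log(result.messages[result.messages.length - 1].content);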
04 Multi‑Branch Routing
In real projects an agent often needs to dispatch to several expert nodes based on user intent (e.g., weather, code lookup, casual chat). The router returns a label that is mapped to a node name.
import { z } from "zod";

// Assumed for this snippet: a state channel for the classified intent and a
// small classifier model (neither is shown in the original excerpt).
const RouterState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({ reducer: (x, y) => x.concat(y), default: () => [] }),
  intent: Annotation<string>({ reducer: (_, y) => y, default: () => "" }),
});
const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

async function classifyIntent(state) {
  const lastMsg = state.messages[state.messages.length - 1];
  const schema = z.object({
    intent: z.enum(["weather", "code", "chat"]).describe(
      "User intent: weather = weather lookup, code = code/documentation lookup, chat = casual conversation"
    ),
  });
  const structured = llm.withStructuredOutput(schema);
  const result = await structured.invoke([
    { role: "system", content: "Classify the user's intent. Return exactly one of weather/code/chat." },
    lastMsg,
  ]);
  return { intent: result.intent };
}

function routeByIntent(state) {
  const intentMap = { weather: "weather_node", code: "code_node", chat: "chat_node" };
  return intentMap[state.intent] ?? "chat_node";
}

// handleWeather, handleCode, and handleChat are ordinary node functions, omitted for brevity.
const workflow = new StateGraph(RouterState)
  .addNode("classify", classifyIntent)
  .addNode("weather_node", handleWeather)
  .addNode("code_node", handleCode)
  .addNode("chat_node", handleChat)
  .addEdge("__start__", "classify")
  .addConditionalEdges("classify", routeByIntent, {
    weather_node: "weather_node",
    code_node: "code_node",
    chat_node: "chat_node",
  })
  .addEdge("weather_node", "__end__")
  .addEdge("code_node", "__end__")
  .addEdge("chat_node", "__end__");

const app = workflow.compile();

05 Loop Routing (Agentic Loop)
Conditional edges can create a loop where the agent calls tools, examines the result, and decides whether to continue or finish. This pattern appears in most ReAct agents.
┌────────────────────────────────────────────────────┐
│                    Agentic Loop                    │
│                                                    │
│  START → [agent] → router → "tools" → [tools]      │
│             ▲                            │         │
│             └────────────────────────────┘         │
│                                                    │
│          router → "__end__" → END                  │
└────────────────────────────────────────────────────┘

The loop is closed by adding .addEdge("tools", "agent"). A safety guard such as a maximum iteration counter prevents infinite loops when the LLM hallucinates or a tool fails.
const SafeLoopState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({ reducer: (x, y) => x.concat(y), default: () => [] }),
  iterationCount: Annotation<number>({ reducer: (_, y) => y, default: () => 0 }),
});

const MAX_ITERATIONS = 10;

function safeRouter(state) {
  if (state.iterationCount >= MAX_ITERATIONS) {
    console.warn(`Reached the maximum of ${MAX_ITERATIONS} iterations, forcing an exit`);
    return "__end__";
  }
  const lastMessage = state.messages[state.messages.length - 1] as AIMessage;
  if (lastMessage.tool_calls && lastMessage.tool_calls.length > 0) {
    return "tools";
  }
  return "__end__";
}

async function callAgentWithCounter(state) {
  const response = await model.invoke(state.messages);
  return { messages: [response], iterationCount: state.iterationCount + 1 };
}

// "toolNode" is the tool-executing node from the earlier example
// (e.g., the callTools function or a prebuilt ToolNode).
const workflow = new StateGraph(SafeLoopState)
  .addNode("agent", callAgentWithCounter)
  .addNode("tools", toolNode)
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", safeRouter) // uses the counter
  .addEdge("tools", "agent");

const app = workflow.compile();

06 LLM as Router: Semantic Dynamic Branching
When the routing logic is complex or requires natural‑language understanding, an LLM can act as the router. The LLM returns a structured decision (next node and a short reason).
const routerLLM = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });

const RouterDecision = z.object({
  next: z.enum(["researcher", "writer", "tools", "__end__"]),
  reason: z.string().describe("One sentence explaining the routing decision"),
});

async function llmRouter(state) {
  const structured = routerLLM.withStructuredOutput(RouterDecision);
  const result = await structured.invoke([
    { role: "system", content: `You are a workflow router. Based on the current state, decide the next step:
- researcher: external information needs to be gathered
- writer: enough information is available to start writing
- tools: a tool call is needed (calculation, lookup, etc.)
- __end__: the task is complete` },
    { role: "user", content: `Recent message history:
${state.messages.slice(-3).map(m => m.content).join("\n")}` },
  ]);
  console.log(`Routing decision: ${result.next}, reason: ${result.reason}`);
  return result.next;
}

// researcherNode, writerNode, and toolsNode are node functions assumed to be defined elsewhere.
const workflow = new StateGraph(AgentState)
  .addNode("agent", callAgent)
  .addNode("researcher", researcherNode)
  .addNode("writer", writerNode)
  .addNode("tools", toolsNode)
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", llmRouter) // LLM decides the next branch; "__end__" routes to END
  .addEdge("researcher", "agent") // one reasonable wiring: specialists hand control back to the agent
  .addEdge("writer", "agent")
  .addEdge("tools", "agent");

const app = workflow.compile();

Rule‑based routing is fast and cheap but inflexible; LLM routing is slower and more expensive but can understand semantics. In production the two are often mixed: an LLM handles intent classification, while rule‑based edges guard the critical exit points.
07 Full Real‑World Example: Customer‑Service Agent with Multi‑Round Branching
This section combines all the previous concepts into a runnable customer‑service agent that can handle order queries, technical support, and general chat, with loop protection.
import { StateGraph, END, START } from "@langchain/langgraph";
import { Annotation, messagesStateReducer } from "@langchain/langgraph";
import { BaseMessage, AIMessage, HumanMessage } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { ToolNode } from "@langchain/langgraph/prebuilt";
// ----- State definition -----
const CustomerServiceState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({ reducer: messagesStateReducer, default: () => [] }),
  category: Annotation<string>({ reducer: (_, y) => y, default: () => "" }),
  resolved: Annotation<boolean>({ reducer: (_, y) => y, default: () => false }),
  turnCount: Annotation<number>({ reducer: (_, y) => y, default: () => 0 }),
});
// ----- Tool definition -----
const queryOrderTool = tool(async ({ orderId }) => {
  return JSON.stringify({
    orderId,
    status: "Shipped",
    estimatedDelivery: "2026-04-28",
    carrier: "SF Express",
    trackingNo: "SF1234567890",
  });
}, {
  name: "query_order",
  description: "Query order status and shipping information",
  schema: z.object({ orderId: z.string().describe("Order ID") }),
});
const allTools = [queryOrderTool];
const toolNode = new ToolNode(allTools);
const llm = new ChatOpenAI({ model: "gpt-4o" }).bindTools(allTools);
const classifierLLM = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
// ----- Classification node -----
async function classifyMessage(state) {
  const schema = z.object({ category: z.enum(["order", "tech", "general"]) });
  const structured = classifierLLM.withStructuredOutput(schema);
  const lastMsg = state.messages[state.messages.length - 1];
  const result = await structured.invoke([
    { role: "system", content: "Classify the user's question: order = orders/logistics, tech = technical support, general = general Q&A" },
    lastMsg,
  ]);
  return { category: result.category };
}
// ----- Expert nodes -----
async function orderAgent(state) {
  const systemPrompt = `You are an order-service specialist. Help the user check order status and handle shipping issues.
You may call the query_order tool to look up real-time order information.`;
  const response = await llm.invoke([{ role: "system", content: systemPrompt }, ...state.messages]);
  return { messages: [response], turnCount: state.turnCount + 1 };
}

async function techAgent(state) {
  const systemPrompt = `You are a technical support engineer. Help the user solve product usage problems.
Keep answers concise and actionable.`;
  const response = await llm.invoke([{ role: "system", content: systemPrompt }, ...state.messages]);
  return { messages: [response], resolved: true, turnCount: state.turnCount + 1 };
}

async function generalAgent(state) {
  const response = await llm.invoke(state.messages);
  return { messages: [response], resolved: true, turnCount: state.turnCount + 1 };
}
// ----- Routing functions -----
function routeByCategory(state) {
  const map = { order: "order_agent", tech: "tech_agent", general: "general_agent" };
  return map[state.category] ?? "general_agent";
}

function shouldOrderContinue(state) {
  if (state.turnCount >= 5) return "__end__";
  const last = state.messages[state.messages.length - 1] as AIMessage;
  return last.tool_calls?.length ? "tools" : "__end__";
}
// ----- Build the graph -----
const workflow = new StateGraph(CustomerServiceState)
  .addNode("classify", classifyMessage)
  .addNode("order_agent", orderAgent)
  .addNode("tech_agent", techAgent)
  .addNode("general_agent", generalAgent)
  .addNode("tools", toolNode)
  .addEdge(START, "classify")
  .addConditionalEdges("classify", routeByCategory)
  .addConditionalEdges("order_agent", shouldOrderContinue)
  .addEdge("tools", "order_agent")
  .addEdge("tech_agent", END)
  .addEdge("general_agent", END);
const app = workflow.compile();
// ----- Test run -----
async function main() {
  const result = await app.invoke({
    messages: [new HumanMessage("Where is my order ORDER-2026-001 right now?")],
  });
  console.log("Final reply:", result.messages[result.messages.length - 1].content);
}

main().catch(console.error);

The sketch below, reconstructed from the graph wiring above, outlines the overall architecture of the customer‑service agent.
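START ──> [classify] ─┬─> [order_agent] ─┬─> [tools] ──> (back to order_agent)
                      │                  └─> END
                      ├─> [tech_agent] ────> END
                      └─> [general_agent] ─> END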
08 Common Pitfalls and Self‑Checklist
Pitfall 1: The router returns a value that does not match any edgeMapping key. The mismatch typically only surfaces at runtime, when the graph cannot resolve the next node, so exercise every branch value in tests.
Pitfall 2: Forgetting to handle all possible return values. Every branch target must exist as a node; an edgeMapping entry that points at a node never added with addNode causes a compile‑time error.
Pitfall 3: Loop router without an exit condition. Without a path to __end__ the agent can loop forever.
Pitfall 4: Missing await inside an async router. If the router calls an LLM or other async function without await, the routing decision is computed from a pending Promise rather than the actual result.
Pitfall 5: Adding multiple Conditional Edges from the same source node. Only the last call takes effect; combine all branching logic into a single router function, as in the sketch below.
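A minimal sketch of the single‑router pattern (the human_review node, the needsReview state field, and the reuse of MAX_ITERATIONS and turnCount from earlier sections are assumptions made for illustration):

// One router covers every branch that can leave the "agent" node.
function combinedRouter(state) {
  if (state.turnCount >= MAX_ITERATIONS) return "__end__"; // safety exit
  const last = state.messages[state.messages.length - 1] as AIMessage;
  if (last.tool_calls?.length) return "tools";             // tool branch
  if (state.needsReview) return "human_review";            // hypothetical review branch
  return "__end__";
}

// Registered once; every outcome is handled inside combinedRouter.
graph.addConditionalEdges("agent", combinedRouter);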
By following the guidelines above you can safely employ Conditional Edge to build flexible, production‑grade LangGraph workflows.
Conclusion
The article dissected LangGraph’s Conditional Edge from definition to implementation, covering the three‑parameter API, rule‑based and LLM‑based routing strategies, loop protection, multi‑branch routing, a complete customer‑service use case, and a checklist of common mistakes.
James' Growth Diary
I am James, focusing on AI Agent learning and growth. I continuously update two series: “AI Agent Mastery Path,” which systematically outlines core theories and practices of agents, and “Claude Code Design Philosophy,” which deeply analyzes the design thinking behind top AI tools. Helping you build a solid foundation in the AI era.