LangGraph Human-in-the-Loop: Let AI Pause for Human Approval at Critical Steps
This article explains why AI agents need human‑in‑the‑loop checkpoints and introduces interrupt(), the pause point that freezes graph execution. It shows how to embed interrupt() in tools and in full graphs, compares three human‑intervention modes, and closes with production‑grade tips and common pitfalls.
Why AI Needs "Wait for Approval"
In high‑risk operations such as sending email, deleting files, making payments, or calling external APIs, a single mistake can be irreversible, so a human confirmation step is required. Low‑confidence decisions (ambiguous intent or insufficient information) also need human input, as do compliance‑driven approvals like contract signing or permission changes.
Core Principle of interrupt()
The interrupt() function acts as a coroutine pause point. When invoked, the graph’s execution context is frozen and persisted to a Checkpointer. An interrupt event is returned to the caller, and the graph remains idle until a Command(resume=...) from a human operator resumes execution. It is not an exception; the graph simply waits.
Unlimited pause duration: seconds, hours, or days, depending on business needs.
State persistence: relies on a Checkpointer (in‑memory or database) to store the frozen state.
Exact resume: Command(resume=value) carries the human decision back to the breakpoint.
Not an error: the graph does not crash; it is merely waiting.
Basic Usage: Using interrupt() Inside a Tool
The most common pattern is to place an interrupt in a tool function that performs a sensitive operation, such as sending an email.
import { interrupt } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
// Simulated sensitive tool: send email
const sendEmailTool = tool(
  async ({ to, subject, body }) => {
    const humanDecision = await interrupt({
      type: "tool_review",
      tool: "sendEmail",
      args: { to, subject, body },
      message: `About to send an email to ${to}, subject: ${subject}. Approve?`,
    });
    if (humanDecision.type === "approve") {
      return `Email sent to ${to}`;
    } else if (humanDecision.type === "edit") {
      const edited = humanDecision.args;
      return `Email sent to ${edited.to} with the edited content`;
    } else {
      return "Email sending was cancelled";
    }
  },
  {
    name: "sendEmail",
    description: "Send an email (requires human approval)",
    schema: z.object({
      to: z.string().describe("Recipient"),
      subject: z.string().describe("Subject"),
      body: z.string().describe("Body"),
    }),
  }
);

Full Graph Integration with HITL
Beyond a single tool, the entire graph must be configured with a Checkpointer; otherwise interrupt() cannot persist state.
import { StateGraph, START, END } from "@langchain/langgraph";
import { MemorySaver } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";
import { Annotation, messagesStateReducer } from "@langchain/langgraph";
// Define State
const AgentState = Annotation.Root({
  messages: Annotation({
    reducer: messagesStateReducer,
    default: () => [],
  }),
});

const llm = new ChatOpenAI({ model: "gpt-4o" });
const tools = [sendEmailTool];
const llmWithTools = llm.bindTools(tools);

async function chatbotNode(state) {
  const response = await llmWithTools.invoke(state.messages);
  return { messages: [response] };
}

const graph = new StateGraph(AgentState)
  .addNode("chatbot", chatbotNode)
  .addNode("tools", new ToolNode(tools))
  .addEdge(START, "chatbot")
  .addConditionalEdges("chatbot", state => {
    const lastMsg = state.messages[state.messages.length - 1];
    return lastMsg.tool_calls?.length ? "tools" : END;
  })
  .addEdge("tools", "chatbot");

// ⚠️ Critical: configure a Checkpointer
const checkpointer = new MemorySaver();
const compiledGraph = graph.compile({ checkpointer });

Trigger and Resume: Complete Stream Lifecycle
First stream() call                  Second stream() call
─────────────────                    ─────────────────
User sends a message                 Command(resume=...)
        ↓                                    ↓
chatbot node runs                    Resumes from the breakpoint
        ↓                                    ↓
tools node runs                      tools continues executing
        ↓                                    ↓
interrupt() fires                    Tool returns its result
        ↓                                    ↓
interrupt event returned             chatbot generates the final reply
        ↓                                    ↓
Waits for human input                stream ends normally

Code demonstration:
import { Command } from "@langchain/langgraph";
const config = { configurable: { thread_id: "user-session-001" } };
// ① First run: triggers the interrupt
console.log("=== First run ===");
for await (const event of compiledGraph.stream(
  { messages: [new HumanMessage("Send a meeting invitation email to [email protected]")] },
  config
)) {
  if (event.__interrupt__) {
    console.log("⏸️ Graph paused, waiting for human review:");
    console.log(JSON.stringify(event.__interrupt__[0].value, null, 2));
    break;
  }
}
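While the graph is paused, the pending interrupt can also be inspected out of band through the compiled graph's getState API. This is a sketch reusing the compiledGraph and config from above; the snapshot fields (next, tasks, interrupts) follow LangGraph's StateSnapshot shape, so verify them against your installed version.

```typescript
// Optional: inspect the paused thread before deciding how to resume.
// Assumes the compiledGraph and config defined in the surrounding example.
const snapshot = await compiledGraph.getState(config);
console.log("Next nodes on resume:", snapshot.next); // e.g. ["tools"]
for (const task of snapshot.tasks) {
  for (const pending of task.interrupts ?? []) {
    // The value is exactly the payload that was passed to interrupt()
    console.log("Pending review payload:", pending.value);
  }
}
```

This is handy for rendering a review UI: the frontend can poll the snapshot and display the interrupt payload without consuming the stream.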
// ② Simulate human approval and resume
console.log("\n=== Human approved, resuming ===");
for await (const event of compiledGraph.stream(new Command({ resume: { type: "approve" } }), config)) {
  if (event.chatbot) {
    const lastMsg = event.chatbot.messages.slice(-1)[0];
    console.log("✅ Final reply:", lastMsg.content);
  }
}
// ③ If the human rejects, resume with
//    new Command({ resume: { type: "reject", reason: "The content is wrong, don't send it yet" } })

Advanced: Three Human‑Intervention Modes
Different scenarios require different depths of involvement.
Mode 1: Pure approval (Approve/Reject)
─────────────────────────────
Before AI acts → show to human → approve and continue / reject and cancel

Mode 2: Approval plus edit (Edit)
─────────────────────────────
Before AI acts → show to human → approve / edit parameters then continue / reject

Mode 3: Active help‑seeking (Human Assistance)
─────────────────────────────
AI hits uncertainty → calls a human → human provides information → AI continues

Code structures for the three modes:
// Mode 1: pure approve/reject
const approveResult = await interrupt({
  message: "Confirm that you want to perform this operation?",
  action: "delete_record",
  target: "user_id_123",
});
if (approveResult !== "approve") {
  return { status: "cancelled" };
}

// Mode 2: approve + edit
const reviewResult = await interrupt({
  type: "review",
  original: { price: 9999, quantity: 10 },
});
const finalParams = reviewResult.type === "edit" ? reviewResult.params : { price: 9999, quantity: 10 };

// Mode 3: ask a human for help
const clarification = await interrupt({
  type: "need_help",
  question: "When the user says 'recently', do they mean the last 3 days or the last week?",
  context: userMessage,
});
const timeRange = clarification.answer; // human-provided answer

Node‑Level Interrupt (Without a Tool)
Sometimes you want to pause at an arbitrary node rather than inside a tool, for example right after the AI generates a plan.
import { interrupt, Command } from "@langchain/langgraph";
import { StateGraph, START, END, Annotation, messagesStateReducer } from "@langchain/langgraph";
const PlanState = Annotation.Root({
  messages: Annotation({ reducer: messagesStateReducer, default: () => [] }),
  plan: Annotation<string[]>({ default: () => [] }),
  approved: Annotation<boolean>({ default: () => false }),
});

async function planNode(state) {
  const planSteps = [
    "1. Search for relevant material",
    "2. Analyze the data",
    "3. Generate the report",
    "4. Send it to the specified mailbox",
  ];
  const decision = await interrupt({
    type: "plan_review",
    plan: planSteps,
    message: "The AI has generated an execution plan. Continue?",
  });
  return { plan: planSteps, approved: decision.type === "approve" };
}

async function executeNode(state) {
  if (!state.approved) {
    return { messages: [{ role: "assistant", content: "Plan cancelled." }] };
  }
  // actual execution logic …
  return { messages: [{ role: "assistant", content: "Plan executed." }] };
}

const planGraph = new StateGraph(PlanState)
  .addNode("plan", planNode)
  .addNode("execute", executeNode)
  .addEdge(START, "plan")
  .addEdge("plan", "execute")
  .addEdge("execute", END)
  .compile({ checkpointer: new MemorySaver() });

Production‑Grade Advice: Common Pitfalls and Best Practices
✅ Correct configuration              ❌ Common mistake
──────────────────────               ──────────────────────
MemorySaver (dev/test)               No checkpointer configured
PostgresSaver (production)           Forgotten checkpointer → interrupt has no effect
Parallel tool calls disabled         Parallel tool calls enabled
Assert tool_calls ≤ 1                Multiple tools interrupt at once → state chaos
Unique ID per thread                 All requests share one thread_id
configurable.thread_id               State pollution, broken resumes
Lean interrupt payload               Payload stuffed with raw data
Only what the reviewer needs         Hard to render on the frontend, wasted transfer

In production, replace MemorySaver with PostgresSaver:
import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres";
const checkpointer = PostgresSaver.fromConnString(process.env.DATABASE_URL!);
await checkpointer.setup(); // create tables
const productionGraph = graph.compile({ checkpointer });

Key lessons:
Lesson 1: A Checkpointer is mandatory; without it interrupt() throws an error.
Lesson 2: Disable parallel tool calls; otherwise non‑interrupted tools may be re‑executed on resume.
Lesson 3: thread_id must be unique and persistent (e.g., business ID + conversation ID) so that a paused interrupt can be recovered after a page refresh.
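Lesson 3 can be made concrete with a small helper. The name threadIdFor is hypothetical (it is not a LangGraph API); the point is that the same business identifiers must always map to the same thread_id.

```typescript
// Derive a stable, unique thread_id from business identifiers so the same
// conversation always resolves to the same paused thread (hypothetical helper).
function threadIdFor(userId: string, conversationId: string): string {
  return `${userId}:${conversationId}`;
}

// Every request for this conversation reuses the same thread_id, so a
// paused interrupt can still be found after a page refresh.
const cfg = { configurable: { thread_id: threadIdFor("user-42", "conv-7") } };
console.log(cfg.configurable.thread_id); // "user-42:conv-7"
```

For Lesson 2, check your chat-model integration for an option to disable parallel tool calls when binding tools (the OpenAI integration exposes such a flag), so that at most one tool call reaches the interrupt at a time.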
Conclusion
interrupt() is a freeze point: the graph fully pauses, state is persisted to a Checkpointer, and it waits for a Command(resume=...) to continue.
Three intervention modes: pure approve/reject, approve + edit, and human assistance, chosen based on risk level.
Checkpointer is foundational: use MemorySaver for development, switch to PostgresSaver for production.
Disable parallel tool calls: ensure only one tool runs at a time to avoid duplicate executions.
Persistent unique thread_id: guarantees that a paused state can be located later.
Not an exception, but a protocol: treat interrupt() as a designed interaction between the graph and a human, not as error handling.
Next, we will dive into LangGraph checkpoint mechanisms to understand how conversation memory and resume‑from‑breakpoint are implemented.
James' Growth Diary
I am James, focusing on AI Agent learning and growth. I continuously update two series: “AI Agent Mastery Path,” which systematically outlines core theories and practices of agents, and “Claude Code Design Philosophy,” which deeply analyzes the design thinking behind top AI tools. Helping you build a solid foundation in the AI era.