How to Add Human‑in‑the‑Loop Interrupts to LangGraph Agents for Safe, Controllable AI Workflows
This guide explains the concept of human‑in‑the‑loop (HITL) interruptions in LangGraph, outlines the core mechanisms such as persistent state and dynamic/static interrupts, and provides detailed Python examples for four classic patterns—approval/rejection, state editing, tool‑call review, and input validation—plus advanced topics like parallel interrupts and MCP‑based tool integration.
Background
LLM‑driven autonomous agents can make errors that are risky in high‑stakes tasks such as booking, paid API calls, or database updates. Human‑in‑the‑Loop (HITL) mechanisms let a human pause execution at critical points, review or edit the model output, and then resume.
LangGraph HITL Core Concepts
Persistent Execution State: After each node, LangGraph saves a checkpoint via a Checkpointer, enabling later resumption without loss of context.
Interrupt Mechanism: Supports dynamic interrupts (triggered conditionally during node execution) and static interrupts (predefined breakpoints before or after nodes).
Integration Points: Call interrupt() anywhere in a node to pause and expose data for human review.
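These mechanics hide a subtlety worth internalizing before the patterns below: interrupt() pauses by raising internally, and resuming re-runs the node from the top, at which point interrupt() returns the human's value instead. A dependency-free Python sketch of those semantics (not LangGraph's actual implementation; Interrupted, run_node, and resume_store are illustrative names):

```python
class Interrupted(Exception):
    """Stand-in for the internal exception LangGraph uses to pause a node."""
    def __init__(self, payload):
        super().__init__(payload)
        self.payload = payload

resume_store = {}  # plays the role of the checkpointer's saved resume value

def interrupt(payload):
    if "value" in resume_store:
        return resume_store["value"]  # resumed run: hand back the human input
    raise Interrupted(payload)        # first run: pause and surface the payload

def run_node(node):
    """Run a node; on a pause, expose the payload like the __interrupt__ key."""
    try:
        return node()
    except Interrupted as exc:
        return {"__interrupt__": exc.payload}

def review_node():
    answer = interrupt({"question": "Approve?"})
    return {"decision": answer}

first = run_node(review_node)        # pauses and surfaces the question
resume_store["value"] = "approve"    # the human responds
second = run_node(review_node)       # the whole node re-runs from the top
```

The key takeaway, which also explains the side-effect caveat later in this article: nothing after the interrupt() call runs on the first pass, and everything before it runs twice.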
Four Classic HITL Patterns
1. Approve or Reject
Pause before a high‑risk operation, ask the human to approve or reject, and route the graph with Command(goto="approved_path") or Command(goto="rejected_path").
from typing import Literal
from langgraph.types import Command, interrupt

def human_approval(state: State) -> Command[Literal["approved_path", "rejected_path"]]:
    decision = interrupt({"question": "Approve?", "llm_output": state["llm_output"]})
    if decision == "approve":
        return Command(goto="approved_path", update={"decision": "approved"})
    else:
        return Command(goto="rejected_path", update={"decision": "rejected"})

2. Edit Graph State
After generating a draft, pause for human editing; the edited result replaces the original state.
def human_review_edit(state: State) -> State:
    result = interrupt({"task": "Edit summary", "generated_summary": state["summary"]})
    return {"summary": result["edited_summary"]}

3. Review Tool Calls
Before invoking an external tool, pause and let the human accept, edit arguments, or provide a custom response.
# `response` is the human's verdict on the proposed tool call, surfaced
# for review via interrupt() before the tool actually runs.
response = interrupt({"action": tool.name, "args": tool_input})
if response["type"] == "accept":
    tool_response = tool.invoke(tool_input, config)
elif response["type"] == "edit":
    tool_input = response["args"]["args"]
    tool_response = tool.invoke(tool_input, config)
elif response["type"] == "response":
    tool_response = response["args"]

4. Validate Human Input
Loop until the human supplies a valid input (e.g., a non‑negative integer age).
def ask_age(state: State) -> dict:
    # Keep re-prompting until the human supplies a valid value.
    prompt = "Please enter your age (a non-negative integer)."
    while True:
        user_input = interrupt(prompt)
        try:
            age = int(user_input)
            if age < 0:
                raise ValueError("Age must be non-negative.")
            break
        except (ValueError, TypeError):
            prompt = f"'{user_input}' is not valid. Please enter a non-negative integer."
    return {"age": age}

Full Workflow Example
Configure a Checkpointer (e.g., InMemorySaver()).
Define nodes that call interrupt() where human review is needed.
Compile the graph with graph_builder.compile(checkpointer=checkpointer).
Run the graph with invoke or stream. The result contains an __interrupt__ payload. Resume using Command(resume=...).
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.types import Command

checkpointer = InMemorySaver()
graph = graph_builder.compile(checkpointer=checkpointer)
config = {"configurable": {"thread_id": "some-unique-id"}}

result = graph.invoke({"summary": "Initial draft..."}, config=config)
print(result["__interrupt__"])

# After human provides edited summary
final = graph.invoke(Command(resume={"edited_summary": "Edited text."}), config=config)

Advanced Topics
Parallel Interrupts: Retrieve all pending interrupts with graph.get_state(config).interrupts and resume them in a single Command(resume=resume_map).
Sub‑graph Interrupts: An interrupt raised inside a sub‑graph propagates to the parent; on resume, execution restarts from the interrupted node in both the parent and the sub‑graph.
Static Interrupts for Debugging: Use interrupt_before and interrupt_after to set breakpoints during compilation (useful for debugging, not production).
Integrating Real‑World Tools via MCP
A generic add_human_in_the_loop decorator wraps any synchronous or asynchronous tool with HITL behavior.
Create a MultiServerMCPClient with the SSE URL and authentication token.
Fetch tool specifications via await search_client.get_tools().
Wrap each tool with add_human_in_the_loop to inject HITL.
Build a LangGraph REACT agent with the wrapped tools and a checkpointer.
Run the agent asynchronously using agent.astream(...) and resume with Command(resume=[{"type": "accept"}]).
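The core branching logic of such a wrapper can be sketched without any LangGraph dependencies. In the real decorator, ask_human would be LangGraph's interrupt() and tool_fn a LangChain tool; both names, and the web_search example, are illustrative:

```python
def add_human_in_the_loop(tool_fn, ask_human):
    """Wrap a plain callable so every invocation is reviewed first."""
    def wrapped(**tool_input):
        # Surface the proposed call and wait for the reviewer's verdict.
        response = ask_human({"action": tool_fn.__name__, "args": tool_input})
        if response["type"] == "accept":
            return tool_fn(**tool_input)        # run exactly as proposed
        if response["type"] == "edit":
            return tool_fn(**response["args"])  # run with human-edited args
        if response["type"] == "response":
            return response["args"]             # human answers in the tool's place
        raise ValueError(f"unsupported response type: {response['type']}")
    return wrapped

def web_search(query: str) -> str:  # hypothetical tool
    return f"results for {query}"

# A reviewer that rewrites the query before allowing the call.
reviewer = lambda request: {"type": "edit", "args": {"query": "langgraph hitl"}}
guarded_search = add_human_in_the_loop(web_search, reviewer)
print(guarded_search(query="original query"))  # results for langgraph hitl
```

The three branches mirror the tool-call review pattern from earlier in this article; the production version from the LangGraph docs additionally re-registers the wrapper as a tool with the original name, description, and argument schema so the agent sees no difference.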
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.prebuilt import create_react_agent
checkpointer = InMemorySaver()
wrapped_tools = [add_human_in_the_loop(t) for t in search_tools]
agent = create_react_agent(model=model, tools=wrapped_tools, checkpointer=checkpointer, name="search_assistant")

Important Note: Resuming a node re‑executes the entire node from its start. Any side effects placed before interrupt() will run again, so place external calls after the interrupt or in a separate downstream node.
References
LangGraph Human‑in‑the‑Loop Overview (https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/)
Tencent Technical Engineering
Official account of Tencent Technology. A platform for publishing and analyzing Tencent's technological innovations and cutting-edge developments.