Mastering LangGraph: Build Stateful, Looping LLM Agents with Python
This tutorial walks through the limitations of linear LangChain workflows, introduces LangGraph’s state‑node‑edge architecture, and provides step‑by‑step code examples—including a Hello‑World tool, conditional branching, multi‑turn conversation handling, and graph visualization—so readers can construct robust, persistent LLM agents.
Linear LangChain workflows struggle when an agent needs to retry tool calls, persist state across steps, switch between LLM models, or resume after an interruption. LangGraph addresses this with a low‑level orchestration framework that adds loops, conditional branching, and persistent state to LangChain agents.
Core Concepts
State: Global context that stores data generated during execution (e.g., task status, results).
Node: A step in the agent pipeline, such as a tool call, LLM invocation, or custom function.
Edge: Logical link that determines the next node; it can be unconditional or conditional (a minimal code sketch follows this list).
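A minimal sketch of how the three abstractions map to code. The CounterState type, the increment node, and the printed result are illustrative assumptions, not part of the tutorial's example:

from typing import TypedDict
from langgraph.graph import StateGraph, END

class CounterState(TypedDict):         # State: shared context that nodes read and write
    count: int

def increment(state: CounterState):    # Node: one step in the pipeline
    return {"count": state["count"] + 1}

builder = StateGraph(CounterState)
builder.add_node("increment", increment)
builder.set_entry_point("increment")   # Edge: entry point -> increment (unconditional)
builder.add_edge("increment", END)     # Edge: increment -> END (unconditional)
graph = builder.compile()
print(graph.invoke({"count": 0}))      # {'count': 1}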
Key Features
Loops and branches for iterative and conditional processing.
Automatic persistence of each node’s output to State, enabling recovery after crashes.
Human‑in‑the‑loop control to pause or skip nodes.
Native streaming output support (both sketched briefly after this list).
Seamless integration with LangChain and LangSmith.
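A brief sketch of two of these features, reusing the CounterState graph from the sketch above. The interrupt_before argument and the stream API are standard LangGraph calls, but treat the exact printed output as an assumption:

from langgraph.checkpoint.memory import MemorySaver

# Pause before the "increment" node (human-in-the-loop); a checkpointer is
# required so the interrupted run can be resumed under the same thread_id.
app = builder.compile(checkpointer=MemorySaver(), interrupt_before=["increment"])
config = {"configurable": {"thread_id": "demo"}}
app.invoke({"count": 0}, config=config)         # execution stops before "increment"
for update in app.stream(None, config=config):  # passing None resumes the paused run
    print(update)                               # streams node-by-node state updates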
Installation
# Install LangGraph, LangChain, and the Ollama model dependencies
pip install -U langgraph langchain langchain_ollama
Hello‑World Example
Creates a simple date‑retrieval tool, binds it to an Ollama LLM, and builds a graph that routes between an LLM node and a tool node.
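The code assumes an Ollama server listening locally on port 11434 with the qwen3:32b model already available; if the model has not been downloaded yet, pull it first:

# Download the model used in this example (assumes Ollama is installed and running)
ollama pull qwen3:32b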
# Create a date‑retrieval tool
from datetime import datetime
from langchain_core.tools import tool
@tool
def get_current_day():
    """Get today's date."""
    return datetime.now().strftime("%Y-%m-%d")
tools = [get_current_day]
# Build the tool node
from langgraph.prebuilt import ToolNode
tool_node = ToolNode(tools)
# Bind tools to the LLM
from langchain_ollama import ChatOllama
llm = ChatOllama(base_url="http://localhost:11434", model="qwen3:32b").bind_tools(tools)
# Define the LLM node
from langchain_core.messages import HumanMessage
from langgraph.graph import StateGraph, MessagesState, END
def call_llm(state: MessagesState):
    messages = state["messages"]
    response = llm.invoke(messages)
    # MessagesState's reducer appends the returned message(s) to the existing history
    return {"messages": [response]}
# Build the workflow
workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_llm)
workflow.add_node("tools", tool_node)
workflow.set_entry_point("agent")
# Conditional edge: if the LLM requests a tool, go to the tool node; otherwise finish
from typing import Literal
def should_continue(state: MessagesState) -> Literal["tools", END]:
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END
workflow.add_conditional_edges("agent", should_continue)
workflow.add_edge("tools", "agent")
# Persistence checkpoint (in‑memory; can be swapped for Redis/MongoDB)
from langgraph.checkpoint.memory import MemorySaver
checkpointer = MemorySaver()
app = workflow.compile(checkpointer=checkpointer)
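# Note: MemorySaver keeps checkpoints only in process memory. For durable storage
# (such as the Redis/MongoDB option mentioned above), LangGraph provides separate
# checkpointer packages; a hedged sketch using SQLite, assuming the
# langgraph-checkpoint-sqlite package is installed:
#   import sqlite3
#   from langgraph.checkpoint.sqlite import SqliteSaver
#   checkpointer = SqliteSaver(sqlite3.connect("checkpoints.db", check_same_thread=False))
#   app = workflow.compile(checkpointer=checkpointer)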
# Invoke the graph with a user query (a thread_id is required once a checkpointer is configured)
final_state = app.invoke({"messages": [HumanMessage(content="What's the date today?")]},
                         config={"configurable": {"thread_id": 1}})
result = final_state["messages"][-1].content
print(result)  # e.g., 2024-04-21
Multi‑Turn Conversation
Because the checkpointer keys saved state by thread_id, supplying the same thread_id in the config lets the graph retain conversation history across turns.
# First turn – ask for the date
final_state = app.invoke({"messages": [HumanMessage(content="What's the date today?")]},
                         config={"configurable": {"thread_id": 42}})
print(final_state["messages"][-1].content)
# Second turn – ask a follow-up that refers to the previous turn
final_state = app.invoke({"messages": [HumanMessage(content="Which date did I just ask about?")]},
                         config={"configurable": {"thread_id": 42}})
print(final_state["messages"][-1].content)
Graph Visualization
The compiled graph can be exported as a Mermaid PNG for inspection.
# Save the graph as a PNG
graph_png = app.get_graph().draw_mermaid_png()
with open('langgraph1.png', 'wb') as f:
    f.write(graph_png)
Execution Flow Diagram (textual description)
__start__: initialization node added by StateGraph.
agent: primary LLM node, set as the entry point.
tools: tool‑execution node.
Solid edge: direct transition (e.g., workflow.add_edge("tools", "agent")).
Dotted edge: conditional transition defined by should_continue (e.g., workflow.add_conditional_edges("agent", should_continue)).
__end__: termination node.
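If you only need to verify the wiring, the same graph object can also emit the Mermaid definition as plain text (draw_mermaid_png typically calls out to an external Mermaid renderer, whereas this stays local):

# Print the Mermaid source of the compiled graph instead of rendering a PNG
print(app.get_graph().draw_mermaid())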
Summary of Benefits
LangGraph builds on LangChain to support loops, branches, and multi‑turn stateful dialogues.
The three core abstractions—State, Node, Edge—provide a clear mental model for building complex agent pipelines.
The Hello‑World example demonstrates reduced boilerplate compared to raw LangChain while adding persistent state handling.
References
LangGraph official documentation: https://langchain-ai.github.io/langgraph/
DeepSeek‑R1 model (used for content generation): https://chat.deepseek.com/