Build a Graph‑Based LLM Agent with LangGraph: Step‑by‑Step Tutorial

This article introduces LangGraph, a Python library for building stateful, multi‑agent LLM workflows. It explains the library's looping, persistence, and human‑in‑the‑loop features, shows how to install it, and walks through a complete code example that builds, runs, and reuses a search‑enabled AI agent with thread‑level state saving.


Overview

LangGraph is a Python library for constructing large‑language‑model (LLM) applications that require state, loops, and multiple participants. It gives fine‑grained control over workflow execution and supports persistent state, making it suitable for reliable agents and human‑in‑the‑loop scenarios. The design draws inspiration from Pregel and Apache Beam, and its public API resembles NetworkX. Although created by the same team behind LangChain, LangGraph can be used independently.
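To show that NetworkX‑flavored API in miniature, here is a small sketch (not from the article or the library docs; the State schema and the shout node are illustrative names): state is a plain TypedDict, each node is an ordinary Python function that returns a state update, and edges connect named nodes.

from typing import TypedDict
from langgraph.graph import END, StateGraph

# State is a plain dict schema shared by every node in the graph
class State(TypedDict):
    text: str

# A node is just a function from the current state to a partial state update
def shout(state: State):
    return {"text": state["text"].upper()}

graph = StateGraph(State)
graph.add_node("shout", shout)
graph.set_entry_point("shout")
graph.add_edge("shout", END)

app = graph.compile()
print(app.invoke({"text": "hello langgraph"}))  # {'text': 'HELLO LANGGRAPH'}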

Key Features

Loops and branching: Implement iterative processes and conditional logic directly in the graph.

Persistence: Automatically save the graph state after each step; you can pause and resume execution, enabling error recovery, human‑in‑the‑loop workflows, and time‑travel debugging.

Human‑in‑the‑loop: Interrupt execution to approve or edit an agent's next action (see the sketch after this list).

Streaming support: Stream output (including token streams) from each node as it is generated.

LangChain integration: Works seamlessly with LangChain and LangSmith, though neither is required.
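To make the human‑in‑the‑loop feature concrete, here is a hedged sketch layered on the agent graph built later in this article (interrupt_before is a real compile() option; the thread id and query are illustrative):

# Hypothetical human-in-the-loop variant of the example graph below:
# pause before the "tools" node so a person can review the pending tool call.
app = workflow.compile(checkpointer=checkpointer, interrupt_before=["tools"])

config = {"configurable": {"thread_id": "review-1"}}
app.invoke({"messages": [HumanMessage(content="what is the weather in sf")]}, config)

# The run is now paused; the checkpoint records that "tools" would run next.
print(app.get_state(config).next)  # ('tools',)

# Once a human approves, resume from the checkpoint by passing None as input.
app.invoke(None, config)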

Installation

pip install -U langgraph

Example: Simple Search Agent

The example demonstrates a stateful agent that uses a search tool to answer weather queries. It highlights how state is passed between nodes, how conditional edges decide the next step, and how persistence enables context reuse across invocations.

pip install langchain-anthropic
export ANTHROPIC_API_KEY=sk-...

# Optional: enable LangSmith tracing for debugging
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=lsv2_sk_...

from typing import Literal
from langchain_core.messages import HumanMessage
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, StateGraph, MessagesState
from langgraph.prebuilt import ToolNode

# Define a simple search tool
@tool
def search(query: str):
    """Call to browse the web (placeholder implementation)."""
    if "sf" in query.lower() or "san francisco" in query.lower():
        return "It is 60°F and foggy now."
    return "It is 90°F and sunny now."

tools = [search]
tool_node = ToolNode(tools)
model = ChatAnthropic(model="claude-3-5-sonnet-20240620", temperature=0).bind_tools(tools)

# Decide whether to continue with tools or finish
def should_continue(state: MessagesState) -> Literal["tools", END]:
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    return END

# Call the LLM and append its response to the message list
def call_model(state: MessagesState):
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}

# Build the graph
workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)
workflow.set_entry_point("agent")
workflow.add_conditional_edges("agent", should_continue)
workflow.add_edge("tools", "agent")

# Enable persistence across runs
checkpointer = MemorySaver()
app = workflow.compile(checkpointer=checkpointer)

# First invocation – ask about San Francisco weather
final_state = app.invoke(
    {"messages": [HumanMessage(content="sf 的天气如何")]},
    config={"configurable": {"thread_id": 42}}
)
print(final_state["messages"][-1].content)

# Second invocation with the same thread_id retains context
final_state = app.invoke(
    {"messages": [HumanMessage(content="那纽约呢")]},
    config={"configurable": {"thread_id": 42}}
)
print(final_state["messages"][-1].content)

The first call asks for San Francisco weather; the agent uses the search tool, returns a foggy 60°F description, and stores the interaction in the persistent state. The second call, using the same thread_id, retrieves the stored context and answers a New York weather query, demonstrating how LangGraph preserves conversation history across separate invocations.
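Because the checkpointer saves a snapshot after every step, the thread can also be inspected after the fact. The following is a small sketch, not part of the original example; it assumes the app and thread_id from above:

config = {"configurable": {"thread_id": 42}}

# Latest checkpoint: the accumulated messages plus the next node to run
# (an empty tuple once the run has finished)
snapshot = app.get_state(config)
print(len(snapshot.values["messages"]), snapshot.next)

# Every saved checkpoint, newest first -- the basis for time-travel debugging
for state in app.get_state_history(config):
    print(len(state.values["messages"]), state.next)

Streaming works the same way. Here is a hedged sketch of streaming the full state step by step (stream_mode="values" and pretty_print exist in current versions, but treat the details as assumptions):

for chunk in app.stream(
    {"messages": [HumanMessage(content="what is the weather in sf")]},
    config={"configurable": {"thread_id": 7}},
    stream_mode="values",
):
    # Each chunk is the full state after one step; show the newest message
    chunk["messages"][-1].pretty_print()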

Conclusion

LangGraph provides a low‑level yet expressive framework for building controllable, stateful LLM agents with looping, persistence, and human‑in‑the‑loop capabilities. By combining simple Python functions with a graph‑based execution model, developers can create sophisticated multi‑agent workflows that are easy to debug, extend, and integrate with existing LangChain tooling.

Tags: Python · AI · LLM · LangChain · LangGraph · Stateful Workflow
Written by JavaEdge

Hands‑on development experience at several leading tech companies; now a software architect at a Shanghai state‑owned enterprise and founder of Programming Yanxuan, with nearly 300k followers online. Expertise in distributed system design, AIGC application development, and quantitative finance investing.
