Unlock Powerful AI Agents with LangGraph: Build, Tool, and Memory in 3 Steps

This tutorial shows how LangGraph builds on LangChain for agent development: you build a basic chatbot, integrate tool calls, and add both short‑term and long‑term memory, complete with code examples and visualizations for creating flexible, state‑driven AI agents.

Instant Consumer Technology Team

When you first hear about large‑model frameworks, you might think tools like LangChain are unnecessary, assuming a few lines of Python can handle everything. The author shares a similar initial skepticism until discovering LangGraph, which simplifies tool integration and agent development.

The official tutorial splits the process into six steps; mastering the first three solves about 80% of agent development challenges.

Build a basic chatbot

Add tools

Add memory

Advanced: Add human‑in‑the‑loop controls

Advanced: Customize state

Advanced: Time travel

LangGraph Basic Concepts

If LangChain is a development "kit" for large models, LangGraph can be seen as a blueprint specifically for building agents, sitting one level above LangChain. The framework models an agent as a directed graph in which each node is a function and edges define state transitions; unlike a strict DAG, the graph may contain cycles, such as the tool‑calling loop built in Step 2.

An agent consists of three components: a large language model (LLM), a set of tools, and a prompt. The LLM runs in a loop: it selects a tool to call, provides the input, receives the observation, and uses it to decide the next action, until a stop condition is met.
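The loop described above can be sketched in a few lines of plain Python (a toy illustration, not LangGraph code; `run_agent`, `fake_llm`, and the `tools` dict are all hypothetical names invented for this sketch):

```python
def run_agent(llm, tools, messages, max_steps=10):
    """Loop: call the model, execute any requested tool, stop when done."""
    for _ in range(max_steps):
        reply = llm(messages)                      # {"tool": name or None, ...}
        if reply["tool"] is None:                  # stop condition reached
            return reply["content"]
        observation = tools[reply["tool"]](reply["input"])
        messages = messages + [{"role": "tool", "content": observation}]
    raise RuntimeError("max steps exceeded")

# Stub model: ask for a search first, then answer using the observation.
def fake_llm(messages):
    if any(m["role"] == "tool" for m in messages):
        return {"tool": None, "content": "Answer: " + messages[-1]["content"]}
    return {"tool": "search", "input": "LangGraph"}

tools = {"search": lambda q: f"results for {q}"}
print(run_agent(fake_llm, tools, [{"role": "user", "content": "What is LangGraph?"}]))
# → Answer: results for LangGraph
```

Real frameworks add message formatting, error handling, and parallel tool calls, but the control flow is essentially this loop.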

LangGraph’s graph can contain multiple nodes, each representing a function, with directed edges indicating the flow of state.

State : a shared data structure that stores the current graph state (e.g., conversation history, tool results). All nodes can read and update it.

Node : a function that receives the current State as input and returns an updated State. Nodes can perform LLM inference, tool calls, etc.

Edge : a directed connection between nodes that represents a state transition.
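These three concepts can be demonstrated with a toy interpreter (a hypothetical sketch: nodes are plain functions over a shared state dict, and edges are a simple successor map; none of these names come from LangGraph itself):

```python
# Nodes: functions that take the current state and return an updated state.
def add_user_msg(state):
    return {**state, "messages": state["messages"] + ["hello"]}

def respond(state):
    return {**state, "messages": state["messages"] + ["hi there"]}

# Edges: a map from each node to its successor.
nodes = {"input": add_user_msg, "chat": respond}
edges = {"START": "input", "input": "chat", "chat": "END"}

# State: a shared dict threaded through the graph, node by node.
state, current = {"messages": []}, edges["START"]
while current != "END":
    state = nodes[current](state)
    current = edges[current]
assert state["messages"] == ["hello", "hi there"]
```

LangGraph generalizes this with typed state, reducers like add_messages, conditional edges, and persistence.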

Defining the graph completes the agent; LangGraph can also render a visualization of the graph.

Step 1: Build a Basic Chatbot

The core of LangGraph is state management, so we first define the state structure and node functions.

from dataclasses import dataclass
from typing import Annotated

from typing_extensions import TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages


# The article assumes an AgentConfig class; a minimal version for completeness:
@dataclass
class AgentConfig:
    model_name: str
    temperature: float
    max_tokens: int
    openai_api_key: str = "YOUR_API_KEY"          # placeholder
    openai_base_url: str = "YOUR_API_BASE_URL"    # OpenAI-compatible endpoint


main_agent_config = AgentConfig(
    model_name="Doubao-1.5-lite-32k",
    temperature=0.1,
    max_tokens=4000,
)


class State(TypedDict):
    # add_messages appends new messages to the list instead of replacing it
    messages: Annotated[list, add_messages]


llm = ChatOpenAI(
    model=main_agent_config.model_name,
    temperature=main_agent_config.temperature,
    max_tokens=main_agent_config.max_tokens,
    openai_api_key=main_agent_config.openai_api_key,
    openai_api_base=main_agent_config.openai_base_url,
)


def chat_node(state: State):
    result = llm.invoke(state["messages"])
    return {"messages": [result]}


if __name__ == "__main__":
    graph = StateGraph(State)
    graph.add_node("chat", chat_node)
    graph.add_edge(START, "chat")
    graph.add_edge("chat", END)
    app = graph.compile()
    result = app.invoke({"messages": [{"role": "user", "content": "你好呀朋友"}]})  # "Hello, friend"
    print(result["messages"][-1].content)

This defines a chat node and connects the special START and END nodes, completing the graph. Each LangGraph always includes these start and end nodes.

You can retrieve the graph structure with app.get_graph(), which is useful for debugging.

# Generate a visual PNG of the graph
graph_image = app.get_graph().draw_mermaid_png(output_file_path="agent_graph.png")

Agent Basic Flowchart

Step 2: Add Tools

Adding tool‑calling capability is central to an agent. With LangGraph, you define a dedicated node for tool calls and set up conditional edges.

from langgraph.prebuilt import ToolNode, tools_condition

# Tools.google_search and Tools.code_check are the author's own tool functions
tools = [Tools.google_search, Tools.code_check]
llm_with_tools = llm.bind_tools(tools)

def chat_node(state: State):
    result = llm_with_tools.invoke(state["messages"])
    return {"messages": [result]}

graph = StateGraph(State)  # build a fresh graph for this step
graph.add_node("chat", chat_node)
graph.add_node("tools", ToolNode(tools))

graph.add_edge(START, "chat")
graph.add_conditional_edges(
    "chat",
    tools_condition,  # routes to "tools" or "__end__"
    {"tools": "tools", "__end__": "__end__"},
)
graph.add_edge("tools", "chat")
# No unconditional "chat" -> END edge is needed: tools_condition already
# routes to __end__ when the model makes no tool call.
app = graph.compile()
result = app.invoke({"messages": [{"role": "user", "content": "你好呀朋友"}]})
print(result.get("messages", [])[-1].content)

The new tools node is triggered when the chat node’s output contains a tool‑call request; otherwise the flow proceeds to END. After tool execution, control returns to the chat node, forming a loop.
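Conceptually, tools_condition just inspects the last message for pending tool calls. A hand-rolled equivalent might look like this (a sketch only; `Msg` and `route_after_chat` are hypothetical names, and real LangChain messages carry richer tool_calls objects):

```python
class Msg:
    """Stand-in for a chat message; real messages expose a tool_calls list."""
    def __init__(self, tool_calls=None):
        self.tool_calls = tool_calls or []

def route_after_chat(state):
    """Route to the tools node if the model requested a tool, else end."""
    last = state["messages"][-1]
    if getattr(last, "tool_calls", None):
        return "tools"
    return "__end__"

print(route_after_chat({"messages": [Msg(tool_calls=[{"name": "google_search"}])]}))  # → tools
print(route_after_chat({"messages": [Msg()]}))  # → __end__
```

Writing your own routing function like this is also how you add custom branching logic beyond the built-in tools_condition.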

Agent and Tool Node Flowchart

Step 3: Add Memory

An agent that remembers conversation history is essential. LangGraph offers short‑term and long‑term memory modes.

Short‑Term Memory

Short‑term memory stores the current dialogue context (user queries and agent responses). It is implemented via a Checkpointer. Example using SqliteSaver:

import sqlite3
from langgraph.checkpoint.sqlite import SqliteSaver

conn = sqlite3.connect("checkpoints.sqlite", check_same_thread=False)
memory = SqliteSaver(conn)
app = graph.compile(checkpointer=memory)

During inference, a thread_id identifies the conversation:

config = {"configurable": {"thread_id": "1"}}
result = app.invoke(
    input={"messages": [{"role": "user", "content": "你好呀朋友"}]},
    config=config
)
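What the checkpointer does under the hood can be pictured as a per-thread history map (a toy sketch; `ToyCheckpointer` is a made-up class, while the real SqliteSaver also stores full graph state and supports resuming mid-run):

```python
class ToyCheckpointer:
    """Persist per-thread message history keyed by thread_id."""
    def __init__(self):
        self.threads = {}

    def load(self, thread_id):
        return self.threads.get(thread_id, [])

    def save(self, thread_id, messages):
        self.threads[thread_id] = messages

cp = ToyCheckpointer()
history = cp.load("1") + [{"role": "user", "content": "hi"}]
cp.save("1", history)
assert cp.load("1") == history
assert cp.load("2") == []   # a different thread_id starts fresh
```

This is why two invocations with the same thread_id share context while a new thread_id begins a blank conversation.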

The following interaction shows the agent remembering the user’s identity across turns:

user: I'm 代码里程碑 (Code Milestones). Will you remember that?
assistant: Got it, you are 代码里程碑. …

user: Do you know who I am?
assistant: You are 代码里程碑. …

Long‑Term Memory

Long‑term memory persists information across sessions using a store (e.g., InMemoryStore or custom RAG). Example:

from langgraph.store.memory import InMemoryStore

embeddings = AIHubMixEmbedding()  # any LangChain-compatible embedding model
store = InMemoryStore(index={"embed": embeddings, "dims": 1536})
namespace = ("users", "memories")
store.put(namespace, "user_123", {"name": "John Smith", "language": "English", "food_preference": "I like pizza"})
store.put(namespace, "user_124", {"food_preference": "I like apples"})

def get_rules(query: str):
    """Retrieve stored user facts relevant to the query."""
    from langgraph.config import get_store
    return get_store().search(namespace, query=query, limit=1)

tools = [get_rules]
llm_with_tools = llm.bind_tools(tools)
app = graph.compile(store=store)
result = app.invoke(
    input={"messages": [{"role": "user", "content": "do you know my name?"}]},
    config=config,
)
print(result.get("messages", [])[-1].content)

The store passed during compilation acts as the long‑term memory backend; you can access it anywhere via get_store(). The built‑in long‑term memory provides basic RAG capabilities, but for more advanced use‑cases you may prefer integrating your own external RAG service.
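Stripped of embeddings, the store is a namespaced key-value map. A toy sketch (`ToyStore` is a made-up class; the real InMemoryStore adds semantic search via the index= argument):

```python
class ToyStore:
    """Namespaced key-value storage, the skeleton of a long-term memory store."""
    def __init__(self):
        self.data = {}

    def put(self, namespace, key, value):
        self.data[(namespace, key)] = value

    def get(self, namespace, key):
        return self.data.get((namespace, key))

s = ToyStore()
s.put(("users", "memories"), "user_123", {"name": "John Smith"})
assert s.get(("users", "memories"), "user_123")["name"] == "John Smith"
assert s.get(("users", "memories"), "user_999") is None
```

Namespaces keep memories for different users (or different kinds of memory) cleanly separated, which is why the examples above scope everything under ("users", "memories").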

Conclusion

To summarize, LangGraph offers:

Flexible flow control : supports conditional branches, loops, and complex workflows.

Simple tool integration : reduces prompt engineering effort for reliable tool calls.

Powerful memory features : built‑in short‑term and long‑term memory simplify state management.

By mastering these three steps, you have the core usage of LangGraph. Next, you can explore advanced features such as human‑in‑the‑loop control, custom state management, and time travel.

All code in this article is open‑source at https://github.com/codemilestones/TinyCodeBase . Feel free to star the repository and follow for updates.

Tags: Python, LLM, Tool Integration, Agent Memory, LangGraph