Building a ReACT Agent with LangGraph’s Low‑Level API (Part 8)

This tutorial shows how to use LangGraph’s low‑level API to create a multi‑turn chatbot, define structured state, add model and tool nodes, implement conditional branching for tool calls, and reproduce a ReACT graph agent with a weather‑assistant example.


1. Building a Multi‑turn Chatbot with LangGraph

First, define the graph's structured state using a TypedDict with an Annotated messages field, passing the add_messages helper as the reducer so message lists are merged safely.
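To build intuition for what an add_messages-style reducer does, here is a minimal stand-alone sketch: merge_messages below is a hypothetical, simplified stand-in (the real add_messages works on LangChain message objects, but the append-or-replace-by-id behavior is the same idea).

```python
def merge_messages(existing: list, update: list) -> list:
    """Simplified stand-in for add_messages: append new entries,
    replacing any existing entry that shares an id."""
    by_id = {m["id"]: i for i, m in enumerate(existing)}
    merged = list(existing)
    for m in update:
        if m["id"] in by_id:
            merged[by_id[m["id"]]] = m  # same id: replace in place
        else:
            merged.append(m)            # new id: append
    return merged

state = [{"id": "1", "content": "hi"}]
state = merge_messages(state, [{"id": "2", "content": "hello"}])
state = merge_messages(state, [{"id": "2", "content": "hello!"}])  # update by id
```

This is why nodes can return just the new messages: the reducer folds them into the accumulated history instead of overwriting it.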

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

Next, add a large‑model node named chatbot and connect it to the START edge. The model is initialized with init_chat_model (e.g., deepseek-chat).

from langchain.chat_models import init_chat_model
from langgraph.constants import START

model = init_chat_model(
    model="deepseek-chat",
    model_provider="deepseek",
    api_key='your_deepseek_api_key'
)

def chatbot(state: State):
    return {"messages": [model.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
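Each node receives the current state and returns a partial update, which LangGraph merges into the state via the field's reducer. The contract can be illustrated with plain dicts; apply_node and echo_node below are hypothetical stand-ins for LangGraph's internals and for a model node, not real library APIs.

```python
def apply_node(state: dict, node) -> dict:
    """Stand-in for how a graph applies a node: the node returns a
    partial update, and each key is merged with its reducer
    (here: list concatenation for "messages")."""
    update = node(state)
    merged = dict(state)
    for key, value in update.items():
        merged[key] = merged.get(key, []) + value
    return merged

def echo_node(state):
    # A toy node that appends one reply message, like chatbot above.
    return {"messages": ["echo: " + state["messages"][-1]]}

state = apply_node({"messages": ["hi"]}, echo_node)
```

Note that echo_node returns only the new message; the merge step is what preserves the earlier history.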

Compile the graph and invoke it with an initial message list to obtain the final state.

graph = graph_builder.compile()
final_state = graph.invoke({"messages": ["Hello, my name is Chen Ming. Long time no see."]})
print(final_state['messages'])

For multi‑turn dialogue, extend the message list similarly to LangChain’s approach.

from langchain_core.messages import AIMessage, HumanMessage
messages_list = [
    HumanMessage(content="Hello, my name is Fun with Large Models. Long time no see."),
    AIMessage(content="Hi there! I'm Teacher Cang, an actress. Nice to meet you!"),
    HumanMessage(content="Do you still remember my name?"),
]
final_state = graph.invoke({"messages": messages_list})
print(final_state['messages'])

2. Reproducing the create_react_agent Graph with Low‑Level API

Define the same state structure as before, this time named AgentState.

from typing import Annotated, TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(AgentState)

Prepare the model (again deepseek-chat) and a weather‑query tool that calls the Seniverse API.

from langchain.chat_models import init_chat_model
from langchain_core.tools import tool
from pydantic import BaseModel, Field
import requests

class WeatherQuery(BaseModel):
    loc: str = Field(description="City name")

@tool(args_schema=WeatherQuery)
def get_weather(loc: str):
    """Query real-time weather from the Seniverse API."""
    url = "https://api.seniverse.com/v3/weather/now.json"
    params = {"key": "your_seniverse_api_key", "location": loc, "language": "zh-Hans", "unit": "c"}
    response = requests.get(url, params=params)
    return response.json()['results'][0]['now']
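The final line assumes the Seniverse response nests the current conditions under results[0]["now"]. A sketch of the shape this parsing expects; the payload below is illustrative, made up to match the code, not captured from the live API.

```python
# Illustrative payload; field names follow the parsing in get_weather
# (results[0]["now"]), but the values here are invented, not real API output.
sample_response = {
    "results": [
        {
            "location": {"name": "Shanghai"},
            "now": {"text": "Sunny", "temperature": "22"},
        }
    ]
}

now = sample_response["results"][0]["now"]
```

If the API returns an error (e.g., an invalid key), the "results" key may be absent, so production code would want to check for that before indexing.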

model = init_chat_model(
    model='deepseek-chat',
    model_provider='deepseek',
    api_key='your_deepseek_api_key'
)
tools = [get_weather]
model = model.bind_tools(tools)

Define the two core nodes: call_model (invokes the LLM with a system prompt) and tool_node (executes the bound tools). Also define should_continue to decide whether to end or call a tool based on the presence of tool_calls in the last message.

from langchain_core.messages import SystemMessage
from langgraph.prebuilt import ToolNode

def call_model(state: AgentState):
    system_prompt = SystemMessage("You are an AI assistant that answers user questions and can also call a weather function.")
    response = model.invoke([system_prompt] + state["messages"])
    return {"messages": [response]}

tool_node = ToolNode(tools)

def should_continue(state: AgentState):
    last_message = state["messages"][-1]
    if not last_message.tool_calls:
        return "end"
    else:
        return "continue"
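Because should_continue only inspects the tool_calls attribute of the last message, its routing can be sanity-checked without a model. FakeMessage below is a hypothetical stand-in for an AIMessage, not a LangChain class; should_continue is reproduced so the snippet is self-contained.

```python
from dataclasses import dataclass, field

@dataclass
class FakeMessage:
    # Hypothetical stand-in for an AIMessage; only tool_calls matters here.
    tool_calls: list = field(default_factory=list)

def should_continue(state):
    # Same logic as the graph's router: tool calls pending -> "continue".
    last_message = state["messages"][-1]
    return "continue" if last_message.tool_calls else "end"

# A plain answer routes to END; a pending tool call routes to the tools node.
plain = {"messages": [FakeMessage()]}
calling = {"messages": [FakeMessage(tool_calls=[{"name": "get_weather"}])]}
```

The two return strings must match the keys of the mapping passed to add_conditional_edges below, or the graph will fail to route.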

Assemble the graph: add the agent and tools nodes, connect edges, and use graph.add_conditional_edges with the should_continue function to create the ReACT decision loop.

from langgraph.graph import StateGraph, START, END

graph = StateGraph(AgentState)

graph.add_node("agent", call_model)
graph.add_node("tools", tool_node)

graph.add_edge(START, "agent")
graph.add_edge("tools", "agent")

graph.add_conditional_edges(
    "agent",
    should_continue,
    {"continue": "tools", "end": END},
)

graph = graph.compile()
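Stripped of LangGraph, the compiled graph encodes the classic ReACT control flow: call the model, execute any requested tools, feed the results back, and stop when no tools are requested. The sketch below shows that loop with plain dicts; react_loop and stub_model are hypothetical stand-ins, not library code.

```python
def react_loop(question, call_model, run_tools, max_steps=5):
    """Minimal ReACT loop: agent node, conditional edge, tools node."""
    messages = [question]
    for _ in range(max_steps):
        reply = call_model(messages)           # "agent" node
        messages.append(reply)
        if not reply.get("tool_calls"):        # should_continue -> "end"
            return messages
        for call in reply["tool_calls"]:       # "tools" node
            messages.append({"role": "tool", "content": run_tools(call)})
    return messages

def stub_model(messages):
    # Fake LLM: asks for a tool once, then answers from the tool result.
    if any(m.get("role") == "tool" for m in messages):
        return {"role": "assistant", "content": "It is sunny.", "tool_calls": []}
    return {"role": "assistant", "content": "", "tool_calls": [{"name": "get_weather"}]}

history = react_loop(
    {"role": "user", "content": "What's the weather like in Shanghai?"},
    stub_model,
    lambda call: "Sunny, 22C",
)
```

The max_steps bound plays the role of a recursion limit, preventing a model that keeps requesting tools from looping forever.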

Run the ReACT agent with a weather query. The graph calls the weather tool and returns the result.

final_state = graph.invoke({"messages": ["What's the weather like in Shanghai?"]})
print(final_state['messages'])

3. Summary

The article demonstrates how to construct a multi‑turn chatbot and a ReACT‑style agent using LangGraph’s low‑level API. By defining a typed state, adding model and tool nodes, and wiring conditional edges, developers can reproduce the functionality of the high‑level create_react_agent helper. The weather‑assistant example illustrates tool calling, decision branching, and final state extraction, providing a concrete reference for building flexible AI agents with LangGraph.

Tags: Python, AI agents, Tool Calling, multi-turn dialogue, LangGraph, low-level API
Written by Fun with Large Models

Master's graduate from Beijing Institute of Technology, published four top‑journal papers, previously worked as a developer at ByteDance and Alibaba. Currently researching large models at a major state‑owned enterprise. Committed to sharing concise, practical AI large‑model development experience, believing that AI large models will become as essential as PCs in the future. Let's start experimenting now!
