Build a Weather‑Query ReAct Agent with LangGraph: Step‑by‑Step Guide

This article walks through building a stateful, ReAct-style LLM agent with LangGraph. It covers the core components (State, Nodes, Edges), defines a weather-lookup tool backed by the Open-Meteo API, wires up the graph's nodes and conditional edges, and runs the workflow with streaming so each step can be observed in real time.


Overview of LangGraph and ReAct

LangGraph is a framework for building stateful LLM applications, making it especially suitable for constructing ReAct (Reasoning and Acting) agents that interleave model reasoning with tool execution.

Key Components

State: a shared data structure (usually a TypedDict or Pydantic BaseModel) that holds the current snapshot of the workflow.

Nodes: functions that receive the current State, perform computation or side effects (e.g., LLM calls or tool calls), and return an updated state.

Edges: conditional logic that decides which Node runs next based on the current State.
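To make these three pieces concrete, here is a minimal toy graph, separate from the weather agent (all names here are illustrative): a counter node loops on itself until the conditional edge routes to END.

from typing import TypedDict
from langgraph.graph import StateGraph, END

class CounterState(TypedDict):
    count: int

def increment(state: CounterState):
    # A Node: receives the State and returns a partial update
    return {"count": state["count"] + 1}

def is_done(state: CounterState):
    # An Edge: inspects the State and picks the next node
    return "end" if state["count"] >= 3 else "again"

workflow = StateGraph(CounterState)
workflow.add_node("inc", increment)
workflow.set_entry_point("inc")
workflow.add_conditional_edges("inc", is_done, {"again": "inc", "end": END})
print(workflow.compile().invoke({"count": 0}))  # {'count': 3}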

Agent State Definition

from typing import Annotated, Sequence, TypedDict
from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    """The state of the agent."""
    messages: Annotated[Sequence[BaseMessage], add_messages]
    number_of_steps: int

The messages field stores the conversation history, while number_of_steps counts iterations. The add_messages annotation is a reducer: when a node returns new messages, they are appended to the existing list rather than replacing it.
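A quick standalone sketch of what the add_messages reducer does when a node returns new messages:

from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph.message import add_messages

history = [HumanMessage(content="Hi")]
updated = add_messages(history, [AIMessage(content="Hello!")])
# add_messages appends instead of overwriting, so both messages survive
print([m.content for m in updated])  # ['Hi', 'Hello!']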

Weather Query Tool

A tool is defined with the @tool decorator. It uses the Open-Meteo API to fetch the hourly temperature for a given city and date. The decorator parameters are:

@tool: converts a plain Python function into a LangChain-compatible tool.

"get_weather_forecast": the name the LLM will call.

args_schema=SearchInput: validates input arguments using a Pydantic model.

return_direct=True: the tool's raw result is returned directly to the user without further LLM processing. (This flag is honored by LangChain's prebuilt agent executors; in the hand-built graph below, the tool result is still routed back to the LLM node.)

@tool("get_weather_forecast", args_schema=SearchInput, return_direct=True)
def get_weather_forecast(location: str, date: str):
    latitude = None
    longitude = None
    # Try cached coordinates first
    location_lower = location.lower().strip()
    if location_lower in CITY_COORDINATES:
        latitude, longitude = CITY_COORDINATES[location_lower]
    else:
        try:
            geo_location = geolocator.geocode(location)
            if geo_location:
                latitude = geo_location.latitude
                longitude = geo_location.longitude
        except Exception as geo_error:
            print(f"Geocoding failed: {geo_error}")
    if latitude is not None and longitude is not None:
        try:
            response = requests.get(
                f"https://api.open-meteo.com/v1/forecast?latitude={latitude}&longitude={longitude}&hourly=temperature_2m&start_date={date}&end_date={date}"
            )
            data = response.json()
            return {time: temp for time, temp in zip(data["hourly"]["time"], data["hourly"]["temperature_2m"]) }
        except Exception as e:
            return {"error": str(e)}
    else:
        return {"error": f"Could not find coordinates for {location}. Please try a major city name."}

Node Functions

call_tool(state: AgentState): iterates over tool calls in the last message, invokes the corresponding tool, wraps each result in a ToolMessage, and returns the new messages list.

call_model(state: AgentState, config: RunnableConfig): sends the current messages to the LLM, receives the response, and returns it as a new message.

import json
from langchain_core.messages import ToolMessage
from langchain_core.runnables import RunnableConfig

# Map tool names to tool objects so calls can be dispatched by name
tools_by_name = {t.name: t for t in [get_weather_forecast]}

def call_tool(state: AgentState):
    outputs = []
    for tool_call in state["messages"][-1].tool_calls:
        tool_result = tools_by_name[tool_call["name"]].invoke(tool_call["args"])
        outputs.append(
            ToolMessage(
                # ToolMessage content must be a string, so serialize the result
                content=json.dumps(tool_result),
                name=tool_call["name"],
                tool_call_id=tool_call["id"],
            )
        )
    return {"messages": outputs}

def call_model(state: AgentState, config: RunnableConfig):
    # `model` is a chat model with the tools bound
    # (see the sketch after the graph construction below)
    response = model.invoke(state["messages"], config)
    return {"messages": [response]}

Conditional Edge

The edge decides whether the workflow should continue to the tool node or finish.

def should_continue(state: AgentState):
    messages = state["messages"]
    # If the last message does not contain a tool call, end the graph
    if not messages[-1].tool_calls:
        return "end"
    # Otherwise keep looping
    return "continue"

Graph Construction

from langgraph.graph import StateGraph, END

def main():
    # Create a new graph based on the AgentState definition
    workflow = StateGraph(AgentState)
    # 1. Add nodes
    workflow.add_node("llm", call_model)
    workflow.add_node("tools", call_tool)
    # 2. Set entry point
    workflow.set_entry_point("llm")
    # 3. Add conditional edge after the LLM node
    workflow.add_conditional_edges(
        "llm",
        should_continue,
        {
            "continue": "tools",  # go to tool node
            "end": END,            # finish the graph
        },
    )
    # 4. After the tool node, always go back to the LLM node
    workflow.add_edge("tools", "llm")
    # Compile the graph
    graph = workflow.compile()
    # ... (execution logic follows)
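The graph assumes a model variable that is already bound to the tools. One way to set this up, as a sketch (init_chat_model and the model name are assumptions here; any tool-calling chat model works):

# Hypothetical model setup; substitute any tool-calling chat model
from langchain.chat_models import init_chat_model

llm = init_chat_model("openai:gpt-4o-mini")
# Binding the tools lets the model emit structured tool calls
model = llm.bind_tools([get_weather_forecast])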

Running the Agent

Initialize the input dictionary with a user question, then iterate over graph.stream using stream_mode="values" to see each intermediate state.

inputs = {"messages": [("user", f"What is the weather in Beijing on {datetime.today()}?")]}
for state in graph.stream(inputs, stream_mode="values"):
    last_message = state["messages"][-1]
    last_message.pretty_print()

if __name__ == "__main__":
    main()

Streaming Execution Details

graph.stream is a core LangGraph method that yields the workflow state after each node finishes, allowing real-time observation. The stream_mode can be:

"values": returns the full state after each step.

"updates": returns only the changed parts of the state.

"debug": returns additional debugging information.
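For comparison, here is a sketch of the same loop with stream_mode="updates", which yields one {node_name: partial_state} dict per step:

for update in graph.stream(inputs, stream_mode="updates"):
    for node_name, delta in update.items():
        # Each delta is just what the node returned, e.g. {"messages": [...]}
        print(node_name, "->", delta["messages"][-1].type)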

In the weather-query example the execution proceeds as follows:

1. The user asks for the weather in a city.

2. The LLM node decides to call the weather tool, and should_continue routes along the "continue" edge.

3. The tool node invokes get_weather_forecast and returns the raw forecast.

4. The LLM node consumes the tool result and produces the final answer, after which the "end" edge terminates the graph.

References

ReAct: Synergizing Reasoning and Acting in Language Models – https://arxiv.org/abs/2210.03629

create_react_agent – https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent

API key acquisition – https://console.bce.baidu.com/iam/#/iam/apikey/list

Reducer concept – https://langchain-ai.github.io/langgraph/concepts/low_level/#reducers

ToolNode documentation – https://langchain-ai.github.io/langgraph/how-tos/tool-calling/

Gemini‑API LangGraph example – https://ai.google.dev/gemini-api/docs/langgraph-example

Written by BirdNest Tech Talk

Author of the rpcx microservice framework, original book author, and chair of Baidu's Go CMC committee.
