How to Add Tools to a LangGraph AI Agent for Real‑World Tasks

This tutorial walks through adding custom, pre‑built, and server‑side tools to a LangGraph AI agent. It demonstrates a ReAct workflow, implements conditional edges for web search, enforces structured output for intelligent shutdown, and shows how to monitor token usage with callbacks, all with runnable Python code.


1. What Is an LLM Tool?

An LLM tool is any function you expose for the model to call. Because the model cannot read the function's code, you must provide a clear description of its purpose, inputs, and return value.

Think of a plain model as a brain in a jar—it knows language and can reason, but it cannot interact with the outside world. Adding tools gives the brain a body, enabling web search, file handling, database queries, code execution, and any other capability the function permits.

2. Create Your First Tool

Below is a simple weather‑fetching function that returns a fixed string. We then wrap it with the @tool decorator; its docstring becomes part of the tool description.

def get_weather(city: str) -> str:
    return f"It's rainy in {city}."

Full definition with decorator and documentation:

from langchain.tools import tool
from dotenv import load_dotenv

load_dotenv()  # Load environment variables such as OPENAI_API_KEY

@tool(parse_docstring=True)
def get_weather(city: str) -> str:
    """Return the weather description for the specified city.

    Args:
        city (str): City to query.

    Returns:
        str: Weather description.
    """
    return f"It's rainy in {city}."

Quick Demo: Using a ReAct Agent

The ReAct agent is a popular prebuilt workflow that interleaves reasoning with tool use. We create an in‑memory checkpointer so the agent retains conversation history across turns, and bind the weather tool to the agent.

from langchain.agents import create_agent
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import InMemorySaver

# Create a checkpoint (memory) store
checkpointer = InMemorySaver()
config = {"configurable": {"thread_id": "1"}}

# Build an agent with the weather tool
agent = create_agent(
    model="openai:gpt-4o",
    tools=[get_weather],
    checkpointer=checkpointer
)

# Simple REPL loop (stop with Ctrl+C)
while True:
    query = input("query: ")
    new_state = agent.invoke({"messages": [HumanMessage(query)]}, config)
    answer = new_state["messages"][-1].content
    print("answer:", answer)

# Example output:
# query: hello
# answer: Hello! How can I assist you today?
# query: what's the weather like in Cracow?
# answer: The weather in Cracow is currently rainy. Is there anything else you'd like to know?
Note: In older versions the function is named create_react_agent and lives in langgraph.prebuilt.
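For comparison, a sketch of that older prebuilt API (assuming a langgraph version that still exposes it):

from langgraph.prebuilt import create_react_agent

agent = create_react_agent(
    "openai:gpt-4o",
    tools=[get_weather],
    checkpointer=checkpointer
)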

3. Add Web Search to the LangGraph Agent

Instead of writing a search tool from scratch, we import the built‑in DuckDuckGoSearchResults tool and bind it to the model.

from langchain.chat_models import init_chat_model
from langchain_community.tools import DuckDuckGoSearchResults  # requires the duckduckgo-search package

model = init_chat_model("openai:gpt-4o")
model_with_search = model.bind_tools([DuckDuckGoSearchResults()])  # Key step: bind the search tool
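
The rest of this section assumes the usual LangGraph scaffolding: a State carrying the message list and an iteration counter (both referenced by the nodes below), a StateGraph, and an iteration limit. A minimal sketch, with names chosen to match the later snippets:

from typing import Annotated, TypedDict

from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]  # new messages are appended, not overwritten
    iteration: int                           # incremented by show_answer below

graph = StateGraph(State)
ITERATION_LIMIT = 5  # arbitrary cap; adjust to taste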

We then define ask_llm to use the tool‑enabled model:

def ask_llm(state: State) -> State:
    user_query = input("query: ")
    user_message = HumanMessage(user_query)
    answer_message: AIMessage = model_with_search.invoke(
        state["messages"] + [user_message]
    )
    return {"messages": [user_message, answer_message]}

Key Component: ToolNode

When the model responds with a tool call, ToolNode executes the matching tool and appends the result to the conversation as a ToolMessage.

from langgraph.prebuilt import ToolNode

graph.add_node("web_search", ToolNode(tools=[DuckDuckGoSearchResults()]))

Conditional Edge: Decide When to Search

Not every query needs a tool. We use the prebuilt tools_condition to inspect the model's response: it routes to "tools" when the last message contains tool calls, and to END otherwise.

def show_answer(state: State) -> State:
    print("answer:", state["messages"][-1].content)
    return {"iteration": state["iteration"] + 1}

def sum_up_search(state: State) -> State:
    answer_message: AIMessage = model.invoke(state["messages"])
    return {"messages": [answer_message]}

from langgraph.prebuilt import tools_condition

graph.add_edge(START, "ask_llm")
graph.add_conditional_edges(
    "ask_llm",
    tools_condition,
    {
        "tools": "web_search",
        END: "show_answer"
    }
)
graph.add_edge("web_search", "sum_up_search")
graph.add_edge("sum_up_search", "show_answer")

graph.add_conditional_edges(
    "show_answer",
    lambda state: state["iteration"] < ITERATION_LIMIT,
    {True: "ask_llm", False: END}
)
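
With the wiring done, the graph compiles and runs like any other LangGraph workflow; a minimal sketch, seeding both state fields:

app = graph.compile()
app.invoke({"messages": [], "iteration": 0})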

Full workflow steps:

User asks a question → LLM decides whether a tool is needed.

If a tool is needed → execute tool → LLM summarizes result → show answer.

If no tool is needed → show answer directly.

Check iteration limit → continue or end.

4. Intelligent Shutdown: Let the Agent Understand “Goodbye”

Fixed iteration counts lead to poor UX. We add a smart‑shutdown node that asks the LLM to decide whether the user wants to end the conversation.

We first define a structured output model with pydantic:

from pydantic import BaseModel, Field
from typing import Literal

class Decision(BaseModel):
    decision: Literal["yes", "no"] = Field(
        description="Whether the user explicitly wants to end the conversation (yes/no)"
    )

Then create a model that forces this format:

from langchain_core.messages import SystemMessage

model_decision = model.with_structured_output(Decision)

def end_condition(state: State) -> Literal["yes", "no"]:
    decision: Decision = model_decision.invoke(
        state["messages"] + [SystemMessage("Does the user want to end the conversation?")]
    )
    return decision.decision
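
Since end_condition is a plain function of the state, you can smoke‑test it before wiring it into the graph (the result comes from the model, so treat the output as illustrative):

# Hypothetical check; a farewell message should yield "yes"
print(end_condition({"messages": [HumanMessage("bye, thanks for the help!")], "iteration": 3}))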

Integrate Smart Shutdown into the Workflow

We add a virtual pass‑through node should_end after the answer is shown, so the iteration‑limit check and the LLM's end‑of‑conversation check run in sequence rather than competing from the same node.

def should_end(_: State) -> State:
    return {}  # Virtual node, no state change

graph.add_node("should_end", should_end)

# Re-wire show_answer: when continuing, route to should_end instead of ask_llm
graph.add_conditional_edges(
    "show_answer",
    lambda state: state["iteration"] < ITERATION_LIMIT,
    {True: "should_end", False: END}
)

graph.add_conditional_edges(
    "should_end",
    end_condition,
    {"yes": END, "no": "ask_llm"}
)

5. Monitoring and Callbacks: Track Token Consumption

Token usage can be inspected directly from AIMessage:

answer: AIMessage = model.invoke([HumanMessage("hi")])
print(answer.usage_metadata)  # e.g., {'input_tokens': 15, 'output_tokens': 8, 'total_tokens': 23}

When structured output is used, invoke returns the parsed pydantic object rather than an AIMessage, so there is no usage_metadata field to read. Instead, we attach a callback handler:

from langchain_core.callbacks import UsageMetadataCallbackHandler

callback = UsageMetadataCallbackHandler()
answer = model_decision.invoke(
    state["messages"] + [SystemMessage("Does the user want to end the conversation?")],
    config={"callbacks": [callback]}
)
print(callback.usage_metadata)
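
In current langchain-core versions, callback.usage_metadata is a dictionary keyed by model name, which makes aggregation straightforward:

# usage_metadata maps model name -> token counts, e.g.
# {'gpt-4o': {'input_tokens': 20, 'output_tokens': 5, 'total_tokens': 25}}
for model_name, usage in callback.usage_metadata.items():
    print(f"{model_name}: in={usage['input_tokens']}, out={usage['output_tokens']}")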

Exercise: Log the total input and output tokens after every model call, regardless of whether structured output is used.

Conclusion

By following this guide you have learned how to:

Create three kinds of tools – custom, pre‑built, and server‑side.

Build a conditional workflow that decides when to invoke a tool.

Enforce structured output so the LLM returns a predictable format.

Implement intelligent dialog management that detects a user’s intent to end the conversation.

Monitor token consumption using built‑in metadata or a callback handler.

These capabilities turn a theoretical LangGraph agent into a practical assistant that can search the web, query data, manage conversation flow, and keep track of resource usage.

References

LangChain integrated tools: https://docs.langchain.com/oss/python/integrations/tools

ReAct agent: https://www.ibm.com/think/topics/react-agent

Checkpoint documentation: https://docs.langchain.com/oss/javascript/langgraph/persistence#checkpoints

ReAct paper: https://arxiv.org/abs/2210.03629

Pydantic validation: https://docs.pydantic.dev/latest/
