Quickly Build a LangChain Agent Using the Agent API (Part 6)

This tutorial walks through using LangChain's Agent API to create AI agents with tool calling, demonstrating a weather‑assistant example, parallel and sequential tool calls, and integration of the Tavily search tool, all with concise Python code and step‑by‑step explanations.


Why Switch from Chains to Agents

Traditional Chain workflows are rigid: when a user query does not match the predefined tool, e.g., asking about decision trees instead of the weather, the chain still forces the query through that tool. Agents, defined by LangChain as "large‑model‑planned compositions of chains to satisfy user needs," address this by letting the model plan tool use automatically, including chaining calls and running them in parallel, as the sketch below illustrates.
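
To make the rigidity concrete, here is a minimal plain‑Python sketch, not real LangChain Chain syntax; query_weather_api and fixed_weather_chain are hypothetical names used only for illustration.

# Plain-Python sketch of a hard-wired chain (hypothetical helper names).
def query_weather_api(city: str) -> str:
    return f"(weather data for {city})"  # stand-in for a real API call

def fixed_weather_chain(user_query: str) -> str:
    # No routing step: the weather tool is always invoked, relevant or not,
    # so "What is a decision tree?" still goes to the weather API.
    return query_weather_api(user_query)

# An agent replaces this hard-wired step with model-driven planning: per query,
# it decides which tools to call, in what order, and whether to run them in parallel.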

Quick Setup of a Weather‑Assistant Agent

1. Import dependencies and define a weather‑query function using the @tool decorator.

import requests
from langchain.agents import create_tool_calling_agent, tool, AgentExecutor
from langchain.chat_models import init_chat_model
from langchain_core.prompts import ChatPromptTemplate

@tool
def get_weather(loc: str):
    """Query real-time weather from the Seniverse API.
    :param loc: city name string
    :return: parsed JSON with weather details"""
    url = "https://api.seniverse.com/v3/weather/now.json"
    params = {"key": "YOUR_API_KEY", "location": loc, "language": "zh-Hans", "unit": "c"}
    response = requests.get(url, params=params)
    response.raise_for_status()  # surface HTTP errors instead of failing on parse
    return response.json()['results'][0]['now']
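
Before wiring the tool into an agent, it is worth sanity‑checking it on its own. Functions wrapped with @tool become LangChain tool objects that expose invoke along with the name and description metadata the model uses for tool selection; the city below is just an example query.

# Sanity-check the tool before handing it to an agent.
print(get_weather.name)          # "get_weather"
print(get_weather.description)   # taken from the docstring; the model reads this
print(get_weather.invoke("北京"))  # direct call ("北京" = Beijing), bypassing the agent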

2. Build a prompt template that tells the model it is a weather assistant that can also write results to a file (the file‑writing tool itself is added in a later step).

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a weather assistant. Provide weather info and can write results to a file."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])

3. Initialise the DeepSeek‑V3 model (or any compatible model) and create the agent with create_tool_calling_agent.

model = init_chat_model(model='deepseek-chat', model_provider='deepseek', api_key='YOUR_DEEPSEEK_KEY')
agent = create_tool_calling_agent(model, [get_weather], prompt)

4. Run the agent via AgentExecutor and observe the automatic chain creation and invocation.

agent_executor = AgentExecutor(agent=agent, tools=[get_weather], verbose=True)
response = agent_executor.invoke({"input": "请问今天北京天气怎么样?"})  # "What's the weather in Beijing today?"
print(response)

With verbose=True, the executor prints the chain it builds and the tool calls it makes; the final return value is a dict in {input: ..., output: ...} format.
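
For reference, the returned dict has roughly the following shape; the exact output wording varies by model and run, so treat this as illustrative only.

# Illustrative response shape (output text varies per run):
# {
#     "input": "请问今天北京天气怎么样?",
#     "output": "It is sunny in Beijing today, around ...°C ..."
# }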

Parallel Tool Calls

When the user asks for weather in multiple cities, the agent can invoke tools in parallel.

response = agent_executor.invoke({"input": "请问今天北京和杭州的天气怎么样,哪个城市更热?"})  # "What's the weather in Beijing and Hangzhou today, and which city is hotter?"
print(response)

The result contains separate calls to get_weather for both cities, demonstrating LangChain's parallel execution flow.
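
To verify the two tool calls programmatically instead of reading the verbose log, you can rebuild the executor with return_intermediate_steps=True, a standard AgentExecutor option; a minimal sketch:

# Return the (action, observation) pairs alongside the final answer.
agent_executor = AgentExecutor(
    agent=agent,
    tools=[get_weather],
    verbose=True,
    return_intermediate_steps=True,
)
response = agent_executor.invoke({"input": "请问今天北京和杭州的天气怎么样,哪个城市更热?"})
for action, observation in response["intermediate_steps"]:
    print(action.tool, action.tool_input)  # expect two get_weather entries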

Sequential Calls with File Writing

Adding a file‑write tool enables the agent to fetch weather data and then store it locally.

@tool
def write_file(content: str):
    """Write the given content to a local file."""
    with open('res.txt', 'w', encoding='utf-8') as f:
        f.write(content)
    return "已成功写入本地文件。"  # "Successfully wrote to the local file."

tools = [get_weather, write_file]
prompt = ChatPromptTemplate.from_messages([
    ("system", "你是天气助手,请根据用户的问题,给出相应的天气信息,并具备将结果写入文件的能力"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])
agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
response = agent_executor.invoke({"input": "查一下北京和杭州现在的温度,并将结果写入本地的文件中。"})  # "Look up the current temperature in Beijing and Hangzhou and write the result to a local file."
print(response)

The agent first calls get_weather for both cities (parallel) and then sequentially invokes write_file to store the combined result.
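
Sequential runs like this loop through the model several times (plan, call get_weather twice, then write_file, then answer). If you want to guard against runaway loops, AgentExecutor accepts a max_iterations cap; the sketch below uses an arbitrary value of 5.

# Optional guard: cap the number of plan/act rounds.
# max_iterations=5 is an arbitrary illustrative value (the default is higher).
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=5,
)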

Using Built‑in Search Tool (Tavily)

LangChain also provides built‑in tools like TavilySearchResults for web search.

pip install langchain-community

import os
from langchain_community.tools.tavily_search import TavilySearchResults

os.environ["TAVILY_API_KEY"] = "YOUR_TAVILY_KEY"  # the tool reads its key from this env var
search = TavilySearchResults(max_results=2)
search.invoke("苹果2025WWDC发布会")  # "Apple 2025 WWDC keynote"

The call returns two search results, confirming the tool works.
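
For reference, the return value is a list of result dicts; in recent langchain_community versions each entry includes at least the source url and a content snippet. The shape below is illustrative, not actual output.

# Illustrative shape of the return value (fields abridged):
# [
#     {"url": "https://...", "content": "..."},
#     {"url": "https://...", "content": "..."},
# ]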

Full agent integration with Tavily:

import os

from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain.chat_models import init_chat_model
from langchain_community.tools.tavily_search import TavilySearchResults

os.environ["TAVILY_API_KEY"] = "YOUR_TAVILY_KEY"  # required by TavilySearchResults
search = TavilySearchResults(max_results=2)
tools = [search]
prompt = ChatPromptTemplate.from_messages([
    ("system", "你是一名助人为乐的助手,并且可以调用工具进行网络搜索,获取实时信息。"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])
model = init_chat_model(model='deepseek-chat', model_provider='deepseek', api_key='YOUR_DEEPSEEK_KEY')
agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
response = agent_executor.invoke({"input": "请问苹果2025WWDC发布会召开的时间是?"})  # "When is Apple's 2025 WWDC keynote being held?"
print(response)

The agent queries the web via Tavily and returns the precise event time.

Conclusion

As this walkthrough shows, building a functional AI agent in LangChain takes four steps: define tools, craft a prompt, create an agent with create_tool_calling_agent, and run it via AgentExecutor. Parallel calls, sequential calls, and built‑in tools such as Tavily search are all supported, making rapid prototyping of sophisticated AI assistants straightforward.

Tags: Python, LangChain, AI Agent, Agent API, Tavily Search
Written by Fun with Large Models

Master's graduate from Beijing Institute of Technology, published four top‑journal papers, previously worked as a developer at ByteDance and Alibaba. Currently researching large models at a major state‑owned enterprise. Committed to sharing concise, practical AI large‑model development experience, believing that AI large models will become as essential as PCs in the future. Let's start experimenting now!
