LangChain Tool Integration: Step‑by‑Step Guide to Built‑in and Custom Functions

This article walks through how to integrate LangChain's built‑in tools and user‑defined functions into AI agents, covering environment setup, installing dependencies, using the Python code interpreter tool, binding tools to a model, parsing tool calls with JsonOutputKeyToolsParser, and demonstrating both a data‑analysis example and a weather‑lookup function.


1. Integrating LangChain Built‑in Tools

First, install the required packages in an Anaconda environment named langchainenv:

pip install langchain-community langchain-experimental pandas

Use the built‑in PythonAstREPLTool to run Python code on a pandas DataFrame containing city data. Load the CSV, create the tool, and invoke it to compute the mean GDP:

import pandas as pd
from langchain_experimental.tools import PythonAstREPLTool

df = pd.read_csv('global_cities_data.csv')
tool = PythonAstREPLTool(locals={"df": df})
res = tool.invoke("df['GDP_Billion_USD'].mean()")
print(res)

Bind the tool to a large language model (e.g., Qwen3‑8B via SiliconFlow) and invoke the chain:

from langchain.chat_models import init_chat_model
model = init_chat_model(
    model="Qwen/Qwen3-8B",
    model_provider="openai",
    base_url="https://api.siliconflow.cn/v1/",
    api_key="YOUR_API_KEY"  # your SiliconFlow API key
)
llm_with_tools = model.bind_tools([tool])
response = llm_with_tools.invoke("Calculate the average GDP_Billion_USD.")
print(response)

To extract only the tool call, use JsonOutputKeyToolsParser with the tool name:

from langchain_core.output_parsers.openai_tools import JsonOutputKeyToolsParser
parser = JsonOutputKeyToolsParser(key_name=tool.name, first_tool_only=True)
llm_chain = llm_with_tools | parser
response = llm_chain.invoke("Calculate the average GDP_Billion_USD.")
print(response)

Add a system prompt so the model knows the DataFrame df is available, then compose the chain as prompt | llm_with_tools | parser | tool so that the code the model emits is extracted and executed automatically.

2. Integrating Custom Functions as Tools

Define a weather‑lookup function and expose it to LangChain with the @tool decorator. The decorator reads the docstring to describe the function to the model.

import requests
from langchain_core.tools import tool

@tool
def get_weather(loc: str):
    """Query real‑time weather for a city.
    :param loc: city name (string)
    :return: JSON object with temperature and conditions.
    """
    url = "https://api.seniverse.com/v3/weather/now.json"
    params = {
        "key": "YOUR_API_KEY",
        "location": loc,
        "language": "zh-Hans",
        "unit": "c"
    }
    response = requests.get(url, params=params)
    return response.json()['results'][0]['now']

Bind the custom tool to the same model and invoke it through a prompt:

tools = [get_weather]
llm_with_tools = model.bind_tools(tools)
response = llm_with_tools.invoke("What is the weather in Beijing?")
print(response)

The model returns a tool_calls structure containing the function name and arguments. Use JsonOutputKeyToolsParser to extract the arguments, then pass them to the function via a chain:
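Concretely, the message's tool_calls attribute is a list of dicts; the plain-Python sketch below (values illustrative, the id is made up) mimics what the parser extracts:

```python
# Illustrative shape of message.tool_calls on the returned AIMessage:
tool_calls = [
    {
        "name": "get_weather",
        "args": {"loc": "Beijing"},
        "id": "call_0",
        "type": "tool_call",
    }
]

# JsonOutputKeyToolsParser(key_name="get_weather", first_tool_only=True)
# boils down to: keep calls whose name matches, return the first args dict.
matches = [call["args"] for call in tool_calls if call["name"] == "get_weather"]
first_args = matches[0] if matches else None
print(first_args)  # {'loc': 'Beijing'}
```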

parser = JsonOutputKeyToolsParser(key_name=get_weather.name, first_tool_only=True)
llm_chain = llm_with_tools | parser
response = llm_chain.invoke("What is the weather in Beijing?")
print(response)

# Execute the function automatically
get_weather_chain = llm_chain | get_weather
final = get_weather_chain.invoke("What is the weather in Shanghai?")
print(final)

The complete workflow demonstrates how LangChain can seamlessly combine large language models with both built‑in utilities and user‑defined functions, enabling agents to perform data analysis and external API calls automatically.

3. Summary

The integration process follows a consistent pattern: install dependencies, create a tool (built‑in or custom), bind it to a model with model.bind_tools(), optionally add a prompt template, parse the model’s tool call using JsonOutputKeyToolsParser, and finally execute the tool within the chain. This approach lays the foundation for building more complex LangChain agents that orchestrate multiple tools.

Tags: Python, AI agents, Tool Integration, LangChain, Function Calling
Written by Fun with Large Models

Master's graduate from Beijing Institute of Technology, published four top‑journal papers, previously worked as a developer at ByteDance and Alibaba. Currently researching large models at a major state‑owned enterprise. Committed to sharing concise, practical AI large‑model development experience, believing that AI large models will become as essential as PCs in the future. Let's start experimenting now!
