Mastering AI Function Calling: Turn LLMs into Actionable Assistants

Function Calling lets large language models invoke external tools or APIs during a conversation, transforming them from passive responders into proactive assistants; this guide explains the concept, the workflow, and practical implementations through weather, parallel-query, and stock-price examples built with OpenAI’s Python SDK.

What is Function Calling?

Function Calling (also called tool calling) is a capability that lets a large language model (LLM) emit a structured instruction to invoke an external function or service when it cannot answer a request directly, for example when the request requires real-time data or complex calculations.

How it works

The process consists of two distinct calls:

First call: The model analyses the user’s intent, decides which function to call, and returns a function_call object containing the function name and JSON-encoded arguments.

Second call: Your program executes the indicated function, captures the result, and feeds it back to the model, which then incorporates the data into a natural-language response.

This two‑step loop allows the LLM to act as a planner while your code performs the actual work.

Example 1: Weather Assistant

# 1. Define the callable tool
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a given city",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name, e.g. Beijing"}
                },
                "required": ["location"]
            }
        }
    }
]

# 2. User query
messages = [{"role": "user", "content": "What's the weather like in Beijing?"}]

# 3. First model call – returns a function_call instruction
# (e.g. {"name": "get_current_weather", "arguments": "{\"location\": \"Beijing\"}"})

# 4. Execute the real weather API (simulated here)
def get_current_weather(params):
    location = params.get("location")
    return f"The current weather in {location} is sunny, 25°C"

# 5. Append the function result and call the model again to generate the final reply
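Putting the pieces together, here is a minimal end-to-end sketch of the two model calls with the OpenAI Python SDK. It reuses the tools, messages, and get_current_weather definitions above; the model name is a placeholder and any OpenAI-compatible endpoint can be substituted via the client configuration.

import json
from openai import OpenAI

client = OpenAI()          # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"      # placeholder; use whichever model your provider exposes

# First call: the model decides whether a tool is needed.
response = client.chat.completions.create(
    model=MODEL,
    messages=messages,
    tools=tools,
    tool_choice="auto",
)
assistant_msg = response.choices[0].message

if assistant_msg.tool_calls:
    # Keep the assistant's tool-call message in the conversation history.
    messages.append(assistant_msg)

    for tool_call in assistant_msg.tool_calls:
        args = json.loads(tool_call.function.arguments)
        result = get_current_weather(args)  # defined above

        # Feed the tool result back, linked to the request by tool_call_id.
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": result,
        })

    # Second call: the model turns the raw result into a natural-language reply.
    final = client.chat.completions.create(model=MODEL, messages=messages)
    print(final.choices[0].message.content)
else:
    # The model answered directly without calling a tool.
    print(assistant_msg.content)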

Example 2: Parallel Queries (Advanced Technique)

User question: “Check the weather in Beijing and Shanghai at the same time.” Implementation: Set parallel_tool_calls=True in the request. The model emits two separate function_call objects; your program runs both weather API calls, then feeds both results back in a single message list for the second model pass, as shown in the sketch below.
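A sketch of how the parallel case might be handled, assuming the client, MODEL, tools, and get_current_weather helper from Example 1 are already in scope; parallel_tool_calls is passed explicitly for clarity.

import json

messages = [{"role": "user", "content": "Check the weather in Beijing and Shanghai at the same time."}]

# First call: with parallel_tool_calls=True the model may emit several tool calls at once.
response = client.chat.completions.create(
    model=MODEL,
    messages=messages,
    tools=tools,
    parallel_tool_calls=True,
)
assistant_msg = response.choices[0].message
messages.append(assistant_msg)

# Execute every requested call (sequentially here; a thread pool would also work).
for tool_call in assistant_msg.tool_calls:
    args = json.loads(tool_call.function.arguments)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": get_current_weather(args),
    })

# Second call: the model merges both results into a single answer.
final = client.chat.completions.create(model=MODEL, messages=messages)
print(final.choices[0].message.content)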

Example 3: Stock Price Query

# Tool definition for stock price
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_stock_price",
            "description": "Get the real-time price of a given stock",
            "parameters": {
                "type": "object",
                "properties": {"symbol": {"type": "string", "description": "Stock ticker symbol"}},
                "required": ["symbol"]
            }
        }
    }
]

# When the user asks "What is the stock price of Tsingtao Brewery?", the model returns a call to get_stock_price.
# Your backend contacts a financial data API, obtains the price, and the model crafts a friendly answer.
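A sketch of the backend side: a simulated get_stock_price handler plus a small name-to-function registry, so the dispatch loop from Example 1 can serve several tools. The handler and the returned price are illustrative stand-ins for a real financial data API.

import json

# Simulated handler; a real implementation would query a financial data API.
def get_stock_price(params):
    symbol = params.get("symbol")
    return f"The latest price of {symbol} is 78.50 CNY (simulated value)"

# Registry mapping tool names to Python callables keeps the dispatch loop generic.
TOOL_REGISTRY = {
    "get_current_weather": get_current_weather,
    "get_stock_price": get_stock_price,
}

def run_tool_call(tool_call):
    handler = TOOL_REGISTRY[tool_call.function.name]
    args = json.loads(tool_call.function.arguments)
    return handler(args)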

Practical Implementation Details

Core library: OpenAI Python SDK (compatible with many Chinese LLM providers that follow the OpenAI API schema).

Key parameters:

tools – the list of function definitions the model may call.

tool_choice – controls whether the model decides automatically ("auto"), is forced to call a specific function, or is prohibited from calling any tool ("none").

parallel_tool_calls=True – enables simultaneous generation of multiple function calls for multi-part queries.
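A quick sketch of the three tool_choice modes, assuming the client, MODEL, messages, and tools from the examples above:

# Let the model decide on its own whether to call a tool (the default behaviour).
client.chat.completions.create(model=MODEL, messages=messages, tools=tools, tool_choice="auto")

# Force a call to one specific function.
client.chat.completions.create(
    model=MODEL, messages=messages, tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_current_weather"}},
)

# Forbid tool usage entirely; the model must answer in plain text.
client.chat.completions.create(model=MODEL, messages=messages, tools=tools, tool_choice="none")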

Little-Known Facts

Models have no hands: The LLM only generates a plan; your program executes the plan.

Description matters: The model’s decision to call a function depends heavily on the clarity of the description field.

Controllable thinking: Using tool_choice you can force a function call (e.g., {"type": "function", "function": {"name": "get_weather"}}) or disable tool usage entirely.

Relation to MCP : Function Calling is a lightweight tool‑calling mechanism, while Model Context Protocol (MCP) provides a more standardized, enterprise‑grade toolbox for complex AI agents.

References

OpenAI Function Calling official documentation: https://platform.openai.com/docs/guides/function-calling

Alibaba Cloud Model Studio (Qwen) Function Calling guide: https://help.aliyun.com/zh/model-studio/qwen-function-calling

DeepSeek Function Calling documentation: https://api-docs.deepseek.com/zh-cn/guides/tool_calls

Function Calling diagram