Mastering Agent Tool Use: Adding Search, Time, and Calculator Functions
This tutorial extends a minimal LLM agent loop with Tool Use (function calling), giving the agent actionable capabilities: searching the web, retrieving the current date and time, and evaluating mathematical expressions. Along the way it covers the BaseTool architecture, tool registration, system-prompt adjustments, and practical execution examples.
1. Review of the Minimal Agent Loop
The previous lesson implemented a minimal agent loop consisting of a message container, a tool schema, an execute_tool dispatcher, and a single terminate tool. The agent could only signal completion and could not perform any real actions.
2. Why Tool Use Is Needed
Without external tools an agent cannot:
- Fetch up-to-date information beyond its training cutoff.
- Perform reliable arithmetic (LLMs often make mistakes).
- Obtain real-time data such as the current time, weather, or stock prices.
Tool Use (function calling) solves the agent’s actionability problem by letting the LLM decide when to invoke a concrete tool.
3. Goal of This Lesson
Three practical tools are added to the agent:
- search – web search for current information (free Tavily API, API key in .env).
- datetime – retrieve the current date and time.
- calculator – evaluate a mathematical expression.
Together with the existing terminate tool the agent now has four usable functions.
4. Tool Definition Details
4.1 BaseTool Class
```python
from abc import ABC, abstractmethod

class BaseTool(ABC):
    @property
    @abstractmethod
    def name(self) -> str: ...  # tool name

    @property
    @abstractmethod
    def description(self) -> str: ...  # tool description

    @abstractmethod
    def execute(self, **kwargs) -> tuple[bool, str]: ...  # execution logic

    def _parameters_schema(self) -> dict:
        # default: parameter-less tools keep an empty object schema
        return {"type": "object", "properties": {}, "required": []}

    def schema(self) -> dict:
        return {
            "type": "function",
            "function": {
                "name": self.name,
                "description": self.description,
                "parameters": self._parameters_schema(),
            },
        }
```

4.2 SearchTool
```python
import json
import os

from tavily import TavilyClient

class SearchTool(BaseTool):
    @property
    def name(self) -> str:
        return "search"

    @property
    def description(self) -> str:
        return "Search the web for current information..."

    def _parameters_schema(self) -> dict:
        return {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The search query."},
                "max_results": {"type": "integer", "description": "Max results.", "default": 5},
            },
            "required": ["query"],
        }

    def execute(self, **kwargs) -> tuple[bool, str]:
        client = TavilyClient(api_key=os.getenv("TAVILY_KEY"))
        results = client.search(query=kwargs["query"])
        return False, json.dumps(results)
```

4.3 DateTimeTool
```python
from datetime import datetime

class DateTimeTool(BaseTool):
    @property
    def name(self) -> str:
        return "datetime"

    @property
    def description(self) -> str:
        return "Get the current date and time..."

    def execute(self, **kwargs) -> tuple[bool, str]:
        return False, datetime.now().strftime("%Y-%m-%d %H:%M:%S")
```

This tool has no parameters because it always returns the current timestamp.
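To see what a parameter-less tool actually sends to the model, here is a self-contained sketch of `schema()` for a `datetime`-style tool. The minimal `BaseTool` is re-declared so the snippet runs standalone, and it assumes the base class supplies an empty default parameters schema (an assumption for illustration).

```python
from abc import ABC, abstractmethod

class BaseTool(ABC):
    """Minimal re-declaration of the base class for a standalone demo."""
    @property
    @abstractmethod
    def name(self) -> str: ...

    @property
    @abstractmethod
    def description(self) -> str: ...

    def _parameters_schema(self) -> dict:
        # assumed default: parameter-less tools get an empty object schema
        return {"type": "object", "properties": {}, "required": []}

    def schema(self) -> dict:
        return {
            "type": "function",
            "function": {
                "name": self.name,
                "description": self.description,
                "parameters": self._parameters_schema(),
            },
        }

class DemoDateTimeTool(BaseTool):
    @property
    def name(self) -> str:
        return "datetime"

    @property
    def description(self) -> str:
        return "Get the current date and time."

# The dict below is exactly what gets appended to the `tools` list
# in the chat-completion request.
print(DemoDateTimeTool().schema())
```

The model never sees the Python class, only this JSON-shaped description, which is why the `description` text matters so much.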
4.4 CalculatorTool
```python
import math

class CalculatorTool(BaseTool):
    @property
    def name(self) -> str:
        return "calculator"

    @property
    def description(self) -> str:
        return "Evaluate a mathematical expression. Use '**' for power."

    def _parameters_schema(self) -> dict:
        return {
            "type": "object",
            "properties": {
                "expression": {"type": "string", "description": "e.g., '2**10', 'sqrt(16)'"},
            },
            "required": ["expression"],
        }

    def execute(self, **kwargs) -> tuple[bool, str]:
        # safety checks + eval: builtins stripped; math functions
        # (sqrt, sin, ...) exposed so expressions like 'sqrt(16)' work
        safe_names = {k: v for k, v in vars(math).items() if not k.startswith("_")}
        result = eval(kwargs["expression"], {"__builtins__": {}}, safe_names)
        return False, str(result)
```

Pitfall: `eval()` must be sandboxed; strip builtins and only allow safe characters.
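The sandboxing the pitfall note calls for can be sketched as a standalone helper (`safe_calc` and its character whitelist are hypothetical names, not the repository's implementation): reject any character outside a whitelist, strip builtins, and expose only `math` functions.

```python
import math
import re

# Hypothetical sandbox sketch: whitelist characters, strip builtins,
# and expose only functions from the math module.
_ALLOWED = re.compile(r"^[0-9a-z_+\-*/%().,\s]+$")

def safe_calc(expression: str) -> str:
    if not _ALLOWED.match(expression):
        raise ValueError(f"disallowed characters in: {expression!r}")
    safe_names = {k: v for k, v in vars(math).items() if not k.startswith("_")}
    result = eval(expression, {"__builtins__": {}}, safe_names)
    return str(result)

print(safe_calc("2**10"))     # -> 1024
print(safe_calc("sqrt(16)"))  # -> 4.0
```

The whitelist blocks quotes and brackets, which closes off the usual string-based `eval` escapes, while the empty `__builtins__` dict prevents name lookups like `__import__`. For production use, a real expression parser (e.g., an AST-based evaluator) is still the safer choice.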
5. Tool Registration and Execution
All tools are instantiated and stored in a dictionary for lookup by name.
```python
from tools import SearchTool, DateTimeTool, CalculatorTool, TerminateTool

_search = SearchTool()
_datetime = DateTimeTool()
_calculator = CalculatorTool()
_terminate = TerminateTool()

TOOL_REGISTRY = {
    "search": _search,
    "datetime": _datetime,
    "calculator": _calculator,
    "terminate": _terminate,
}

def execute_tool(name: str, arguments: dict) -> tuple[bool, str]:
    tool = TOOL_REGISTRY.get(name)
    if tool:
        return tool.execute(**arguments)
    raise RuntimeError(f"Unknown tool: {name}")
```

The returned tuple `(should_stop, output_text)` tells the loop whether to terminate (`should_stop=True`) or continue (`should_stop=False`).
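The dispatch contract can be exercised end-to-end with stub tools (the stub classes here are illustrative stand-ins, not the repository's tools): every tool returns `(should_stop, output)`, and only `terminate` sets `should_stop=True`.

```python
from datetime import datetime

class StubDateTimeTool:
    """Stand-in tool: always continues the loop."""
    def execute(self, **kwargs) -> tuple[bool, str]:
        return False, datetime.now().strftime("%Y-%m-%d %H:%M:%S")

class StubTerminateTool:
    """Stand-in tool: the only one that stops the loop."""
    def execute(self, **kwargs) -> tuple[bool, str]:
        return True, kwargs.get("final", "")

TOOL_REGISTRY = {"datetime": StubDateTimeTool(), "terminate": StubTerminateTool()}

def execute_tool(name: str, arguments: dict) -> tuple[bool, str]:
    tool = TOOL_REGISTRY.get(name)
    if tool:
        return tool.execute(**arguments)
    raise RuntimeError(f"Unknown tool: {name}")

should_stop, output = execute_tool("terminate", {"final": "done"})
print(should_stop, output)  # -> True done
```

Because the contract is just a tuple, adding a new capability means writing one class and one registry entry; the loop itself never changes.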
5.1 Batch Tool Calls
Some LLMs (e.g., OpenAI) may return multiple tool_calls in one response. The loop processes each call, collects results, and respects the should_stop flag.
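For reference, a multi-call response carries a structure roughly like the following (shape assumed from the OpenAI Chat Completions format; note that `arguments` arrives as a JSON-encoded string, which is why the loop must call `json.loads`):

```python
import json

# Assumed OpenAI-style shape: each entry names a function and carries
# its arguments as a JSON string, not a dict.
tool_calls = [
    {"id": "call_1", "type": "function",
     "function": {"name": "search", "arguments": '{"query": "python tutorial"}'}},
    {"id": "call_2", "type": "function",
     "function": {"name": "datetime", "arguments": "{}"}},
]

for call in tool_calls:
    args = json.loads(call["function"]["arguments"])
    print(call["function"]["name"], args)
```

A malformed `arguments` string is a real failure mode here, so production loops often wrap the `json.loads` in a try/except and feed the parse error back to the model.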
```python
if tool_calls:
    for call in tool_calls:
        name = call["function"]["name"]
        args = json.loads(call["function"]["arguments"])
        should_stop, output = execute_tool(name, args)
        messages.append({
            "role": "user",
            "content": f"[TOOL_CALL {name}] {json.dumps(args)}\n[TOOL_RESULT] {output}",
        })
        if should_stop:
            return output
```

5.2 Tool Choice Strategies
The `tool_choice` parameter controls how the model selects tools:
- `"auto"`: the model decides autonomously whether to call a tool.
- `"required"`: forces the model to call some tool.
- An explicit function spec (e.g., `{"type": "function", "function": {"name": "terminate"}}`) forces a particular tool.
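Sketched as they would sit in an OpenAI-style request payload (field names follow the Chat Completions convention; the model name is an illustrative assumption):

```python
# Three ways to set tool_choice in an OpenAI-style request payload.
auto_choice = "auto"          # model decides whether and which tool to call
required_choice = "required"  # model must call some tool
forced_choice = {             # model must call this specific tool
    "type": "function",
    "function": {"name": "terminate"},
}

request = {
    "model": "gpt-4o-mini",   # assumed model name, for illustration only
    "messages": [{"role": "user", "content": "what time is it?"}],
    "tool_choice": forced_choice,
}
print(request["tool_choice"]["function"]["name"])  # -> terminate
```

Forcing a specific tool is handy for the last step of a loop, e.g., compelling the model to call `terminate` once a step budget is exhausted.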
5.3 Error Handling
```python
def execute_tool(name: str, arguments: dict) -> tuple[bool, str]:
    try:
        tool = TOOL_REGISTRY.get(name)
        if tool:
            return tool.execute(**arguments)
    except Exception as e:
        return False, f"Error: {str(e)}"
    return False, f"Unknown tool: {name}"
```

Network timeouts (especially for search) must be caught; otherwise the agent crashes.
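A standalone sketch of that behavior: a stub tool that raises (simulating a search-API timeout; `FlakySearchTool` is a hypothetical name) is caught and surfaced to the model as ordinary output text instead of crashing the loop.

```python
class FlakySearchTool:
    """Stand-in tool that always fails, simulating a network timeout."""
    def execute(self, **kwargs) -> tuple[bool, str]:
        raise TimeoutError("connection to search API timed out")

TOOL_REGISTRY = {"search": FlakySearchTool()}

def execute_tool(name: str, arguments: dict) -> tuple[bool, str]:
    try:
        tool = TOOL_REGISTRY.get(name)
        if tool:
            return tool.execute(**arguments)
    except Exception as e:
        # Failures become tool output, so the model can react to them.
        return False, f"Error: {str(e)}"
    return False, f"Unknown tool: {name}"

print(execute_tool("search", {"query": "x"}))  # error text, loop continues
print(execute_tool("nope", {}))                # unknown tool, loop continues
```

Returning the error as text gives the model a chance to retry with different arguments or fall back to another tool, which is usually better than aborting the whole run.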
6. System Prompt Adjustments
```python
def _system_prompt(self) -> str:
    return (
        "You are a helpful AI Agent.\n"
        "You have access to several tools:\n"
        "- `search(query: string)` - Search the web for current information\n"
        "- `datetime()` - Get the current date and time\n"
        "- `calculator(expression: string)` - Evaluate a mathematical expression\n"
        "- `terminate(final: string)` - End the agent loop\n"
        "Rules:\n"
        "1) Use tools to gather information when needed.\n"
        "2) After using a tool, analyze the result and decide the next step.\n"
        "3) When you have the final answer, call `terminate`."
    )
```

During debugging, `tool_choice="required"` makes tool calls explicit; in production, `"auto"` lets the model decide.
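A hand-written tool list like the one above can drift out of sync with the registry. One alternative (a design sketch, not the repository's implementation) is to derive the prompt's tool lines from each tool's schema; the stub schemas below stand in for the real registry.

```python
# Sketch: generate the system prompt's tool list from tool schemas,
# so prompt and registry cannot drift apart.
schemas = [
    {"type": "function",
     "function": {"name": "datetime", "description": "Get the current date and time."}},
    {"type": "function",
     "function": {"name": "terminate", "description": "End the agent loop."}},
]

def tool_lines(schemas: list[dict]) -> str:
    return "\n".join(
        f"- `{s['function']['name']}` - {s['function']['description']}"
        for s in schemas
    )

print(tool_lines(schemas))
```

With the real registry this would be `tool_lines([t.schema() for t in TOOL_REGISTRY.values()])`, so adding a tool automatically updates the prompt.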
7. Code Organization
7.1 Directory Structure
```
exercise/
├── lib/                  # shared utilities
│   ├── env.py
│   ├── openai_compat.py
│   └── log.py
├── 02_tool_use/
│   ├── agent.py          # core agent logic
│   ├── main.py           # entry point
│   └── tools/            # tool modules
│       ├── __init__.py
│       ├── base.py
│       ├── registry.py
│       ├── terminate.py
│       ├── datetime.py
│       ├── calculator.py
│       └── search.py
└── .env                  # API configuration
```

7.2 Tools Package
`tools/__init__.py` re-exports all tools and the registry:
```python
from .base import BaseTool
from .terminate import TerminateTool
from .datetime import DateTimeTool
from .calculator import CalculatorTool
from .search import SearchTool
from .registry import TOOL_REGISTRY

__all__ = ["BaseTool", "TerminateTool", "DateTimeTool", "CalculatorTool", "SearchTool", "TOOL_REGISTRY"]
```

`tools/registry.py` builds the dictionary:
```python
from .search import SearchTool
from .datetime import DateTimeTool
from .calculator import CalculatorTool
from .terminate import TerminateTool

TOOL_REGISTRY = {
    "search": SearchTool(),
    "datetime": DateTimeTool(),
    "calculator": CalculatorTool(),
    "terminate": TerminateTool(),
}
```

7.3 Agent Integration
```python
from tools import TOOL_REGISTRY

# Schemas passed to the LLM so it knows which functions exist.
tools = [tool.schema() for tool in TOOL_REGISTRY.values()]

def execute_tool(name: str, arguments: dict) -> tuple[bool, str]:
    tool = TOOL_REGISTRY.get(name)
    if tool:
        return tool.execute(**arguments)
    raise RuntimeError(f"Unknown tool: {name}")
```

7.4 Running Examples
```shell
$ uv run python 02_tool_use/main.py --task "python tutorial"
# Step 1: model calls search
# Step 1: tool returns search results
# Step 2: model calls terminate
# Final answer: [python tutorial content]

$ uv run python 02_tool_use/main.py --task "2 ** 10"
# Step 1: model calls calculator
# Step 1: tool returns 1024
# Step 2: model calls terminate

$ uv run python 02_tool_use/main.py --task "what time is it?"
# Step 1: model calls datetime
# Step 1: tool returns 2026-02-15 20:30:00
# Step 2: model calls terminate
```

8. Core Takeaways
- Tool Use gives an agent "limbs" to interact with the external world.
- Tool definitions (schema, description, parameters) tell the LLM when to invoke each function.
- Tool execution is ordinary Python function calls returning `(should_stop, output)`.
- The system prompt must enumerate available tools and the rules for their use.
Open‑source repository: https://github.com/HUANGLIWEN/mini-manus