Unlocking OpenAI Agents SDK: Core Features, Code Samples, and Framework Comparisons
This article introduces the OpenAI Agents SDK, explains its key capabilities such as Agent Loop, Handoffs, Guardrails, and Tracing, provides practical Python code examples, compares it with other multi‑agent frameworks, and discusses best practices for building reliable AI applications.
OpenAI Agents SDK Overview
OpenAI Agents SDK is a lightweight, easy‑to‑use toolkit for building agent‑based AI applications. It offers fundamental building blocks including agents with instructions and tools, handoffs for delegating tasks between agents, and guardrails for input validation.
Core Features
Agent Loop: Automatically repeats an agent's execution until a task is completed, allowing function calls and tool usage for complex multi-step workflows.
Handoffs: Enables one agent to delegate specific tasks to another, facilitating modular, multi-agent collaboration in large business scenarios.
Guardrails: Performs parallel input validation before an agent runs, aborting early on failures to improve reliability and prevent unsafe behavior.
Tracing: Built-in visual tracing of agent interactions, request/response payloads, and token usage; can be disabled for privacy or resource reasons.
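The agent loop can be sketched conceptually. The stand-in below is illustrative only (the toy model, message format, and helper names are assumptions, not the SDK's real implementation); it captures the core idea of looping until the model stops requesting tools:

```python
# A simplified stand-in for the agent loop -- illustrative only,
# not the SDK's actual implementation.
def run_agent_loop(model, tools, user_input, max_turns=10):
    """Repeatedly call the model, executing requested tools,
    until the model produces a final answer."""
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_turns):
        reply = model(messages)           # model decides: tool call or final answer
        if reply.get("tool_call"):
            name, args = reply["tool_call"]
            result = tools[name](**args)  # execute the requested tool
            messages.append({"role": "tool", "name": name, "content": result})
        else:
            return reply["content"]       # final answer ends the loop
    raise RuntimeError("max turns exceeded")

# Toy model: first asks for the weather tool, then answers.
def toy_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": ("fetch_weather", {"city": "Beijing"})}
    return {"content": "It is sunny in Beijing."}

print(run_agent_loop(toy_model, {"fetch_weather": lambda city: "sunny"}, "Beijing"))
# -> It is sunny in Beijing.
```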
Code Example – Simple Weather Agent
<code># Install dependencies
pip install openai-agents
# or `uv add openai-agents`, etc.

# Set your OpenAI key
export OPENAI_API_KEY=sk-...</code>
<code>from agents import Agent, Runner, function_tool
import asyncio

@function_tool
async def fetch_weather(city: str) -> str:
    """Fetch the weather for a given location.

    Args:
        city: The city to fetch the weather for.
    """
    return "sunny"

weather_agent = Agent(
    name="Weather Expert",
    instructions=(
        "You are a weather query expert. The user enters a city name and "
        "you return that city's weather. Reply concisely."
    ),
    tools=[fetch_weather],
)

async def main():
    result = await Runner.run(weather_agent, input="Beijing")
    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())
</code>
Agent Attribute Details
instructions: System prompt for the agent; can be a static string or a function that returns one dynamically.
name: The agent's identifier.
tools: List of tools the agent can invoke.
handoff_description: Description of this agent that other agents use when deciding whether to hand a task off to it.
handoffs: List of sub-agents that can receive delegated tasks.
model: Model to use (the SDK defaults to gpt-4o).
hooks: Callbacks for various lifecycle events.
mcp_servers: Servers providing Model Context Protocol (MCP) tools.
input_guardrails: Checks run on the input before the agent produces a response.
output_guardrails: Checks run after the final output is generated.
Multiple Agents – Weather + Dressing Example
The article demonstrates chaining a weather expert with a dressing expert, using handoffs so the dressing agent can consume weather information and suggest outfits.
Guardrail Input Example
<code>from pydantic import BaseModel
from agents import Agent

class DressOutput(BaseModel):
    is_dressing: bool
    reasoning: str

# Guardrail agent that checks whether a question is about dressing advice
guardrail_agent = Agent(
    name="Guardrail check",
    instructions=(
        "Determine whether the user's question concerns dressing advice. "
        "If it does, return is_dressing=True and briefly explain why."
    ),
    output_type=DressOutput,
)
</code>
Running the guardrail shows whether a user query is relevant to dressing advice and can optionally terminate the workflow.
Tracing Usage
Tracing is enabled by default and can be disabled with set_tracing_disabled(True). The OpenAI platform provides a visual trace dashboard at https://platform.openai.com/traces .
Framework Comparison
Other popular multi‑agent frameworks include LangGraph, AutoGen, LangChain, Dify, n8n, and proprietary platforms. The article provides a brief comparison and shows how to integrate MCP servers (e.g., Gaode) for external tool access.
Conclusion
Combining workflow orchestration with multi‑agent architectures is the future of AI applications. For rapid MVPs, low‑code platforms suffice, but production‑grade projects benefit from the flexibility and reliability of OpenAI Agents SDK, LangGraph, and related tools.
References
https://openai.github.io/openai-agents-python/
https://docs.dify.ai/en/introduction
https://docs.llamaindex.ai/en/stable/
https://langchain-ai.github.io/langgraph/concepts/why-langgraph/
Qunar Tech Salon
Qunar Tech Salon is a learning and exchange platform for Qunar engineers and industry peers, sharing cutting-edge technology trends and offering mid-to-senior technical professionals a free space to exchange ideas and learn.