Master OpenAI’s New Agents SDK: 10 Core Concepts with a Complete Example
This guide walks you through OpenAI's open‑source Agents SDK, explaining ten essential concepts—from model configuration and agent creation to runners, tools, context handling, guardrails, handoffs, structured output, tracing, and orchestration—while providing runnable Python code and visual demos.
OpenAI has released a brand‑new open‑source Agents SDK, an independent tool for building production‑grade agents that does not depend on OpenAI models. The SDK retains a simple, easy‑to‑use design while adding many enhancements.
01 Models (Large Models)
Configure a client and a model (currently only ChatCompletions type models are supported). Example:
openai_client = AsyncOpenAI(api_key=API_KEY, base_url=BASE_URL)
model = OpenAIChatCompletionsModel(model='your-model-name', openai_client=openai_client)

You can set a global client with set_default_openai_client and specify the default API type with set_default_openai_api("chat_completions"). Disable global tracing when using third‑party API keys:
set_tracing_disabled(disabled=True)

02 Agents (Intelligent Agents)
An Agent is the core abstraction. The minimal configuration includes name, instructions, and model:
main_agent = Agent(
    name="MainAssistant",
    instructions="Assist the user by answering questions.",
    model=model
)

You can clone an existing agent and modify selected attributes:
# Clone and change name/instructions
clone_agent = main_agent.clone(
    name="MainAssistantClone",
    instructions="Assist the user by answering questions, but respond only in English."
)

03 Runner (Execution)
Agents are executed via the Runner class, which offers synchronous (run_sync) and asynchronous (run) methods, as well as streaming execution (run_streamed).
import asyncio

# Synchronous run
result = Runner.run_sync(main_agent, "Hello!")
print(f"AI: {result.final_output}")

# Streaming run (stream_events must be consumed inside an async function)
async def stream_demo():
    result = Runner.run_streamed(main_agent, "Hello")
    async for event in result.stream_events():
        if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
            print(event.data.delta, end="", flush=True)

asyncio.run(stream_demo())

04 Tools (Tool Integration)
Tools extend an agent’s capabilities. Three types are supported:
Hosted tools: OpenAI cloud services such as WebSearch, FileSearch, and ComputerTool (available only with OpenAI models).
Function tools: user-defined Python functions decorated with @function_tool.
Agent tools: wrap another Agent as a tool, enabling multi-agent collaboration.
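To illustrate the third pattern, here is a toy sketch in plain Python of what "one agent exposed as another agent's tool" means. The ToyAgent class, translator, and orchestrator names are all hypothetical stand-ins, not the SDK's actual types.

```python
from dataclasses import dataclass, field
from typing import Callable

# Toy stand-in for the SDK's Agent; the real class is far richer.
@dataclass
class ToyAgent:
    name: str
    respond: Callable[[str], str]
    tools: dict = field(default_factory=dict)

    def as_tool(self):
        # Expose this agent's respond() as a plain callable tool.
        return self.respond

translator = ToyAgent(name="Translator", respond=lambda text: f"[EN] {text}")

orchestrator = ToyAgent(
    name="Orchestrator",
    respond=lambda q: q,
    tools={"translate": translator.as_tool()},
)

print(orchestrator.tools["translate"]("bonjour"))  # → [EN] bonjour
```

The point of the pattern is that the calling agent never sees the inner agent's internals, only a callable interface.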
Example of a function tool that performs a web search using Tavily:
@function_tool
def search_web(query_str: str):
    """Perform a web search with Tavily and return the results."""
    ...

main_agent = Agent(
    ...,
    tools=[search_web]
)

05 Context (Global State)
The Context object carries arbitrary data (commonly a dataclass or Pydantic model) through the entire run, making it accessible to agents and tools.
# Define a context type
@dataclass
class UserInfo:
    UserId: str
    UserName: str

# Attach context when running
result = Runner.run_sync(
    main_agent,
    user_input,
    context=UserInfo('ID001', '张三')
)

06 Structured Output
When the underlying model supports structured output, you can declare a Pydantic model and set it as output_type. The agent will return an instance of that model instead of raw text.
class Answer(BaseModel):
    """Defines the data structure of the answer."""
    answer_chinese: str
    answer_english: str
    source: str

main_agent = Agent[UserInfo](
    ...,
    output_type=Answer
)

If the chosen model does not support structured output, the SDK may fall back to plain text or raise an error.
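The mechanism that output_type automates can be sketched in plain Python: the model is constrained to emit JSON matching a schema, and the SDK parses that JSON into a typed object. This stdlib sketch uses a dataclass instead of Pydantic; the raw string is a made-up example reply.

```python
import json
from dataclasses import dataclass

@dataclass
class Answer:
    answer_chinese: str
    answer_english: str
    source: str

# A structured reply as a schema-constrained model might emit it.
raw = '{"answer_chinese": "你好", "answer_english": "Hello", "source": "dictionary"}'

data = json.loads(raw)
# Constructing the dataclass fails loudly if a required field is missing,
# which is the validation output_type gives you for free.
answer = Answer(**data)
print(answer.answer_english)  # Hello
```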
07 Handoffs (Agent‑to‑Agent Transfer)
Handoffs let one agent delegate a request to another. Example: a math‑specialist agent handles math questions.
math_agent = Agent(
    name="MathAssistant",
    instructions="You are a math assistant who specializes in answering math-related questions.",
    model=model,
    tools=[calculator]
)

main_agent = Agent[UserInfo](
    ...,
    instructions=(
        "Assist the user by answering in both Chinese and English. "
        "If a math question is asked, hand it off to MathAssistant."
    ),
    handoffs=[math_agent]
)

The handoff flow: the user calls the starting agent → the SDK evaluates the task → if it matches a handoff rule, control transfers to the target agent.
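The flow above can be sketched as a plain-Python routing decision (a toy illustration only; in the SDK the routing is done by the model, not by string matching, and triage and math_specialist are hypothetical names):

```python
def math_specialist(question: str) -> str:
    # Stand-in for the MathAssistant agent.
    return f"MathAssistant: working on {question!r}"

def triage(question: str) -> str:
    # Stand-in for the main agent's handoff decision.
    handoffs = {"math": math_specialist}
    # Crude rule: hand off anything that looks like arithmetic.
    if any(ch in question for ch in "+-*/") or "math" in question.lower():
        return handoffs["math"](question)
    return f"MainAssistant: {question}"

print(triage("what is 2+2"))
print(triage("hello"))
```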
08 Guardrails (Safety Checks)
Guardrails are agents that validate inputs or outputs to prevent risky or unwanted content. Two kinds exist: input guardrails and output guardrails.
class SensitiveCheckOutput(BaseModel):
    is_sensitive: bool
    reasoning: str

input_guardrail_agent = Agent(
    name="ContentModeration",
    instructions="""Check whether the user input contains political topics…""",
    model=model,
    output_type=SensitiveCheckOutput
)

# Note: the function must not itself be named input_guardrail,
# or it would shadow the decorator.
@input_guardrail
async def sensitive_input_check(ctx, agent, input) -> GuardrailFunctionOutput:
    result = await Runner.run(input_guardrail_agent, input, context=ctx.context)
    return GuardrailFunctionOutput(
        output_info=result.final_output,
        tripwire_triggered=result.final_output.is_sensitive,
    )

main_agent = Agent[UserInfo](
    ...,
    input_guardrails=[sensitive_input_check]
)

09 Tracing (Observability)
The SDK provides built‑in tracing of events such as LLM calls, tool invocations, and handoffs. Tracing can be globally disabled with set_tracing_disabled(True) or per‑run via RunConfig(tracing_disabled=True). Third‑party tracing processors (e.g., Logfire) can be integrated:
import logfire
logfire.configure(console=False)
logfire.instrument_openai_agents()
set_trace_processors([])  # remove the default processor

with trace(workflow_name="Test Workflow"):
    ...

Tracing data is organized into Traces (full workflow runs) and Spans (individual operations).
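The Trace/Span hierarchy can be illustrated with a small stdlib sketch (not the SDK's API): a trace wraps the whole workflow, and spans are timed operations nested inside it.

```python
import time
from contextlib import contextmanager

# Collected (event, name[, duration]) tuples stand in for exported trace data.
events = []

@contextmanager
def span(name):
    start = time.perf_counter()
    events.append(("start", name))
    try:
        yield
    finally:
        events.append(("end", name, time.perf_counter() - start))

with span("Test Workflow"):        # the trace: one full workflow run
    with span("llm_call"):         # a span: one operation inside it
        pass
    with span("tool_call"):
        pass

print([e[1] for e in events])
```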
10 Orchestrating (Workflow Composition)
Complex agent systems can be orchestrated in several patterns:
Sequential flow: the output of one agent feeds the next.
Parallel flow: independent steps run concurrently via asyncio.gather.
Routing (handoffs): a routing agent delegates tasks to specialized agents.
Supervisor: a manager agent invokes worker agents as tools.
Reflective loop: one agent generates a response, another evaluates it, and the process repeats until a condition is met.
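The parallel pattern can be sketched with asyncio.gather and stub agent coroutines (run_agent here is a hypothetical stand-in for an actual Runner.run call):

```python
import asyncio

async def run_agent(name: str, task: str) -> str:
    # Stands in for an awaited model call via Runner.run.
    await asyncio.sleep(0.01)
    return f"{name} handled {task!r}"

async def main():
    # Independent steps run concurrently; gather preserves argument order.
    results = await asyncio.gather(
        run_agent("Researcher", "find sources"),
        run_agent("Writer", "draft summary"),
    )
    return results

print(asyncio.run(main()))
```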
Example of a sequential pipeline with context:
result1 = Runner.run_sync(main_agent, user_input, context=UserInfo('ID001', '张三'))
result2 = Runner.run_sync(
    rate_agent,
    result1.to_input_list() + [{"role": "user", "content": "Please evaluate the answer above."}]
)

Overall, the OpenAI Agents SDK offers a lightweight yet powerful foundation for building single- or multi-agent applications, with production-grade features such as tracing and guardrails, while leaving advanced orchestration to the developer.
AI Large Model Application Practice
Focused on deep research and development of large-model applications. Authors of "RAG Application Development and Optimization Based on Large Models" and "MCP Principles Unveiled and Development Guide". Primarily B2B, with B2C as a supplement.