Exploring OpenAI Swarm: A Minimalist Multi‑Agent Orchestration Framework

This article introduces the concept of multi‑agent systems, compares five popular orchestration frameworks, and provides a step‑by‑step tutorial for building and testing a simple supervision‑based workflow using OpenAI's experimental Swarm library, complete with code snippets and performance observations.

AI Large Model Application Practice

What is a Multi‑Agent System?

Multi‑Agent Systems (MAS) decompose complex tasks into several specialized AI agents that collaborate to achieve higher quality, faster, and more reliable results. Typical advantages include division of labor, parallel processing, transparent workflows, flexibility, fault tolerance, and easier scaling.

Key Design Questions for MAS Orchestration

Define agent roles and organizational structure.

Specify each agent’s capabilities, knowledge, and responsibilities.

Design the workflow that coordinates agents to meet a shared goal, deciding which agents communicate directly, which tasks run in parallel, and whether a supervisory agent is needed.

Optimize performance and choose appropriate models for each task.

Five Common MAS Orchestration Frameworks

OpenAI Swarm – lightweight, experimental, limited to OpenAI models.

Microsoft AutoGen – built for multi‑agent dialogue and collaboration.

LangChain LangGraph – graph‑based workflow representation for complex RAG and agent applications.

CrewAI – role‑playing design with concise agent, tool, and task definitions.

LlamaIndex Workflows – event‑driven architecture for dynamic RAG and agent pipelines.

Deep Dive into OpenAI Swarm

Swarm is the lightest-weight of the five frameworks. Its core concepts are:

Agents: a combination of instructions, an LLM (e.g., gpt‑4o), and optional functions.

Handoffs: functions that transfer control from one agent to another, enabling collaboration.

Context Variables: shared state passed between agents, similar to LangGraph's State.
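These three concepts can be illustrated with a short, self-contained sketch. Note that this is a toy stand-in for illustration only, not the real Swarm library: the `Agent` dataclass and the `greet` tool below are hypothetical, but they mirror how Swarm bundles instructions, a model, and functions into an agent, and how tools read shared context variables.

```python
from dataclasses import dataclass, field
from typing import Callable

# Toy data model mirroring Swarm's core concepts (illustration only,
# not the real library): an Agent bundles instructions, a model name,
# and callable tools; context variables are plain shared state.

@dataclass
class Agent:
    name: str
    model: str
    instructions: str
    functions: list[Callable] = field(default_factory=list)

def greet(context_variables: dict) -> str:
    # Tools can read shared state, like Swarm's context variables.
    return f"Hello, {context_variables['user']}!"

assistant = Agent(
    name="Assistant",
    model="gpt-4o",
    instructions="You are a helpful assistant.",
    functions=[greet],
)

context = {"user": "Alice"}
print(assistant.functions[0](context))  # -> Hello, Alice!
```

In the real library, the LLM decides when to call each function; here we invoke the tool directly just to show how shared state flows into it.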

Implementation Example

The following Python code demonstrates a simple supervision workflow where a Supervisor Agent delegates tasks to a Search Agent and a Mail Agent. The agents use TavilySearchResults for web search and a mock mail‑sending function.

from swarm import Agent
from langchain_community.tools.tavily_search import TavilySearchResults

def search_web(query: str) -> str:
    """Search the web using Tavily and return concatenated results."""
    search = TavilySearchResults(max_results=3)
    results = search.invoke(query)
    return "\n".join([r["content"] for r in results])

def mail_tool(context_variables, subject: str, body: str, recipient: str = None) -> str:
    """Mock email sender; uses context variable if recipient not provided."""
    recipient = recipient or context_variables["email"]
    print(f"Sending email to {recipient}, subject: {subject}, body: {body}")
    return f"Sent email to {recipient} with subject '{subject}' and body '{body}'"

search_agent = Agent(
    name="Search Agent",
    model="gpt-4o-mini",
    instructions="You are an AI assistant that can answer questions and search the web.",
    functions=[search_web]
)

mail_agent = Agent(
    name="Mail Agent",
    model="gpt-4o-mini",
    instructions="You are an AI assistant that can send emails.",
    functions=[mail_tool]
)

supervisor_agent = Agent(
    name="Supervisor Agent",
    model="gpt-4o-mini",
    instructions="Evaluate the user query, split tasks, and decide which AI assistant should handle the first step.",
    parallel_tool_calls=False
)

Next, we define handoff functions that allow the supervisor to transfer control to the appropriate agent and let each worker hand the task back when it cannot proceed.

def transfer_back_to_triage():
    """Return control to the supervisor when the current agent cannot handle the request."""
    return supervisor_agent

def transfer_to_search():
    """Hand off the request to the Search Agent."""
    return search_agent

def transfer_to_mail():
    """Hand off the request to the Mail Agent."""
    return mail_agent

supervisor_agent.functions = [transfer_to_search, transfer_to_mail]
search_agent.functions.append(transfer_back_to_triage)
mail_agent.functions.append(transfer_back_to_triage)
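Conceptually, a handoff works because Swarm's run loop inspects the return value of each tool call: when a tool returns an Agent, the loop switches the active agent; otherwise the value is treated as an ordinary tool result. A simplified stdlib-only sketch of that dispatch rule (an illustration of the idea, not Swarm's actual source):

```python
# Simplified sketch of the handoff rule (illustration, not Swarm's
# implementation): a tool that returns an Agent-like object triggers
# a handoff; any other return value is a normal tool result.

class Agent:
    def __init__(self, name: str):
        self.name = name

supervisor = Agent("Supervisor Agent")
search = Agent("Search Agent")

def transfer_to_search():
    return search

def handle_tool_result(active_agent, result):
    """Return the (possibly new) active agent and the tool output."""
    if isinstance(result, Agent):
        return result, f"Handing off to {result.name}"
    return active_agent, str(result)

agent, output = handle_tool_result(supervisor, transfer_to_search())
assert agent is search
print(output)  # -> Handing off to Search Agent
```

This is why the handoff functions above simply return an agent object: no extra protocol is needed beyond the return value.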

Finally, we run a demo loop to interact with the workflow:

from swarm.repl import run_demo_loop
if __name__ == "__main__":
    run_demo_loop(
        supervisor_agent,
        stream=False,
        context_variables={"email": "[email protected]"}
    )

Observed Behavior

In the first task, the supervisor hands the query to the Search Agent, which returns the final answer. In the second task, the Search Agent hands control back to the supervisor, which then delegates to the Mail Agent to send an email. A more complex task demonstrates a chain of handoffs between agents, clearly visualized in the accompanying screenshots.

Pros and Cons of Swarm

Pros: extremely simple API, easy-to-understand source code, quick prototyping for low‑complexity workflows.

Cons: limited customizability, heavy reliance on the underlying LLM's reasoning, weak native support for complex workflows, and minimal community resources.

Despite its limitations, Swarm offers a lightweight entry point for developers who need to stitch together a few independent tasks without the overhead of larger frameworks. Its source code is concise and can be extended for deeper customization.

Next Steps

The next article in this series will implement the same supervision workflow using the other four frameworks for a side‑by‑side comparison.

Tags: Python, LLM, OpenAI, multi-agent systems, Swarm
Written by

AI Large Model Application Practice

Focused on deep research and development of large-model applications. Authors of "RAG Application Development and Optimization Based on Large Models" and "MCP Principles Unveiled and Development Guide". Primarily B2B, with B2C as a supplement.
