Understanding ADK Multi‑Agent Orchestration: SequentialAgent, ParallelAgent, and LoopAgent Explained
The article explains ADK's three core orchestration modes—SequentialAgent for ordered pipelines, ParallelAgent for independent concurrent tasks, and LoopAgent for iterative quality‑control loops—detailing their suitable scenarios, state‑flow mechanisms, and how to build a complete order‑to‑delivery workflow without writing explicit orchestration code.
Three Core Modes: Sequential, Parallel, Loop
ADK provides three orchestration patterns that map to real‑world business flows. Each pattern combines multiple LLM agents into a workflow where the process is declared once and state is automatically passed between agents.
Mode 1: SequentialAgent – Pipeline
Use when steps must execute in a strict order. The output of each agent is stored under an output_key and referenced by the next agent via a placeholder.
Receive Order → Check Inventory → Schedule Production → Verify Quality → Ship

from google.adk.agents import LlmAgent, SequentialAgent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
order_receiver = LlmAgent(
name="order_receiver",
model="gemini-3-flash-preview",
instruction="Extract order details from the customer request.",
output_key="order_details"
)
availability_checker = LlmAgent(
name="availability_checker",
model="gemini-3-flash-preview",
instruction="""Check if items are available based on inventory.
Input: {order_details}
Respond with availability assessment.""",
output_key="availability_status"
)
production_scheduler = LlmAgent(
name="production_scheduler",
model="gemini-3-flash-preview",
instruction="""Schedule production given order and availability.
Order: {order_details}
Availability: {availability_status}
Create a production schedule.""",
output_key="production_schedule"
)
quality_checker = LlmAgent(
name="quality_checker",
model="gemini-3-flash-preview",
instruction="""Verify production schedule meets quality standards.
Schedule: {production_schedule}
Approve or flag for revision.""",
output_key="quality_approval"
)
order_pipeline = SequentialAgent(
name="order_pipeline",
sub_agents=[order_receiver, availability_checker, production_scheduler, quality_checker]
)
runner = Runner(agent=order_pipeline, session_service=InMemorySessionService())
result = runner.send_message(session_id="order-001", message="Customer wants 50 units of Widget A, needs delivery by March 28")
print(result)

The output_key parameter is crucial because it names each agent's output, allowing subsequent agents to reference it with the {placeholder} syntax.
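To make the mechanism concrete, here is a minimal pure-Python sketch of this state threading, independent of ADK; the run_sequential helper and the lambda "agents" are illustrative stand-ins, not ADK APIs.

```python
# Minimal sketch of SequentialAgent-style state threading (no ADK required).
# Each "agent" is a plain function standing in for an LLM call; output_key
# names the slot where its result lands in the shared state dict.

def run_sequential(agents, state):
    for instruction_template, agent_fn, output_key in agents:
        # Resolve {placeholder} references against the accumulated state,
        # then run the agent and store its output under output_key.
        prompt = instruction_template.format_map(state)
        state[output_key] = agent_fn(prompt)
    return state

# Hypothetical stand-in agents: each lambda plays the role of an LLM.
pipeline = [
    ("Extract order from: {request}", lambda p: "50 x Widget A", "order_details"),
    ("Check stock for {order_details}", lambda p: "in stock", "availability_status"),
]

state = run_sequential(pipeline, {"request": "50 units of Widget A"})
print(state["availability_status"])  # in stock
```

The second agent's prompt can reference {order_details} only because the first agent already wrote that key; this ordering guarantee is exactly what the sequential mode provides.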
Mode 2: ParallelAgent – Department Collaboration
Apply when multiple independent tasks can progress simultaneously. All sub‑agents start together and the workflow continues after every agent returns.
Customer Request
├─ Check Pricing (independent)
├─ Check Inventory (independent)
├─ Check Certifications (independent)
└─ [Gather results] → Make decision

from google.adk.agents import LlmAgent, ParallelAgent, SequentialAgent
pricing_checker = LlmAgent(
name="pricing_checker",
model="gemini-3-flash-preview",
instruction="""Determine the best pricing for the requested item.
Item: {order_details}
Return pricing options.""",
output_key="pricing"
)
inventory_checker = LlmAgent(
name="inventory_checker",
model="gemini-3-flash-preview",
instruction="""Check warehouse inventory for availability.
Item: {order_details}
Return availability by location.""",
output_key="inventory"
)
compliance_checker = LlmAgent(
name="compliance_checker",
model="gemini-3-flash-preview",
instruction="""Verify compliance requirements.
Item: {order_details}
Return compliance status.""",
output_key="compliance"
)
evaluation = ParallelAgent(
name="order_evaluation",
sub_agents=[pricing_checker, inventory_checker, compliance_checker]
)
decision_maker = LlmAgent(
name="decision_maker",
model="gemini-3-flash-preview",
instruction="""Make a decision on the order.
Pricing: {pricing}
Inventory: {inventory}
Compliance: {compliance}
Decide: proceed or reject. Explain reasoning.""",
output_key="decision"
)
full_flow = SequentialAgent(
name="order_flow",
sub_agents=[evaluation, decision_maker]
)

ParallelAgent launches all sub‑agents concurrently, so the overall duration is bounded by the slowest task. This yields a significant speed‑up for independent work, but it also increases resource consumption and pressure on API rate limits.
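The timing claim can be sketched with plain asyncio; check and evaluate_order below are hypothetical stand-ins for the sub-agents, not ADK code.

```python
import asyncio
import time

# Sketch of ParallelAgent-style fan-out: independent checks run concurrently,
# so wall time is roughly that of the slowest check, not the sum of all three.

async def check(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stands in for an LLM/tool call
    return f"{name}: ok"

async def evaluate_order() -> list:
    # All three checks start together; gather preserves input order.
    return await asyncio.gather(
        check("pricing", 0.05),
        check("inventory", 0.10),
        check("compliance", 0.02),
    )

start = time.perf_counter()
results = asyncio.run(evaluate_order())
elapsed = time.perf_counter() - start
print(results)  # ['pricing: ok', 'inventory: ok', 'compliance: ok']
```

Run serially these calls would take 0.17 s; concurrently the elapsed time tracks the slowest call (0.10 s), which is the speed-up the article describes.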
Mode 3: LoopAgent – Quality‑Control Loop
Use when a process must repeat until a condition is satisfied.
Produce Draft → Review → Approve?
├─ No → Revise → Review → Approve?
└─ Yes → Complete

from google.adk.agents import LlmAgent, LoopAgent
content_producer = LlmAgent(
name="content_producer",
model="gemini-3-flash-preview",
instruction="""Generate content based on requirements.
Topic: {topic}
Iteration: {iteration}
Produce high-quality content.""",
output_key="content"
)
quality_reviewer = LlmAgent(
name="quality_reviewer",
model="gemini-3-flash-preview",
instruction="""Review the content and decide if it's good enough.
Content: {content}
Respond with: APPROVED or NEEDS_REVISION with specific feedback.""",
output_key="review"
)
content_loop = LoopAgent(
name="content_review_loop",
sub_agents=[content_producer, quality_reviewer],
max_iterations=3, # prevent infinite loops
stop_condition=lambda output: "APPROVED" in output.get("review", "")
)

LoopAgent is powerful but risky: always set max_iterations to bound the loop, and provide a boolean‑returning stop_condition that ends the cycle once it is satisfied.
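A pure-Python sketch of the same produce/review cycle; run_loop, produce, and review are hypothetical stand-ins for the LoopAgent and its sub-agents.

```python
# Sketch of a LoopAgent-style quality loop: a hard iteration cap plus a
# stop condition checked after every produce/review pass.

def run_loop(produce, review, max_iterations, stop_condition):
    state = {}
    for iteration in range(1, max_iterations + 1):
        state["content"] = produce(iteration)
        state["review"] = review(state["content"])
        if stop_condition(state):
            break  # goal met before the cap is reached
    state["iterations"] = iteration
    return state

# Toy agents: the draft "improves" each round; the reviewer rejects only v1.
produce = lambda i: f"draft v{i}"
review = lambda content: "NEEDS_REVISION" if content.endswith("v1") else "APPROVED"

state = run_loop(produce, review, max_iterations=3,
                 stop_condition=lambda s: "APPROVED" in s["review"])
print(state["iterations"], state["review"])  # 2 APPROVED
```

Without the max_iterations cap, a reviewer that never approves would spin forever; the cap converts an unbounded loop into a bounded retry budget.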
State Management: Data Flow Between Agents
The core of ADK orchestration is automatic state transfer via output_key and placeholder syntax.
agent_a = LlmAgent(
name="agent_a",
instruction="Extract customer name from request.",
output_key="customer_name"
)
agent_b = LlmAgent(
name="agent_b",
instruction="Create a welcome message for {customer_name}."
)

When agent_a finishes, its result is stored in session state under customer_name. ADK then substitutes {customer_name} into agent_b's instruction before execution.
Nested workflows follow the same mechanism:
inner_workflow = SequentialAgent(
name="inner",
sub_agents=[agent_a, agent_b],
output_key="welcome_message"
)
outer_workflow = SequentialAgent(
name="outer",
sub_agents=[some_initial_agent, inner_workflow, final_agent]
)

Good practice: name output_key values with clear, snake_case identifiers and document their meaning.
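The nesting idea can be sketched without ADK; run_pipeline and make_nested below are illustrative helpers, not ADK APIs.

```python
# Sketch of nested workflows: an inner pipeline runs with its own copy of
# state, and its final result is stored in the outer state under the inner
# workflow's output_key — the same state mechanism, applied recursively.

def run_pipeline(steps, state):
    # Each step is (fn, output_key); fn reads state and returns one value.
    for fn, output_key in steps:
        state[output_key] = fn(state)
    return state

def make_nested(steps, output_key, result_key):
    # Wrap a whole pipeline so it behaves like a single agent outside.
    def run(outer_state):
        inner = run_pipeline(steps, dict(outer_state))
        return inner[result_key]
    return (run, output_key)

inner_steps = [
    (lambda s: s["request"].title(), "customer_name"),
    (lambda s: f"Welcome, {s['customer_name']}!", "message"),
]

outer = run_pipeline(
    [make_nested(inner_steps, "welcome_message", "message")],
    {"request": "ada lovelace"},
)
print(outer["welcome_message"])  # Welcome, Ada Lovelace!
```

From the outer workflow's point of view, the nested pipeline is just one more agent with one more output_key, which is why nesting requires no extra orchestration code.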
Full Example: Order‑to‑Delivery Pipeline
A seven‑stage pipeline demonstrates how to combine the three modes to process an order from receipt to confirmation without any imperative glue code.
from google.adk.agents import LlmAgent, SequentialAgent, ParallelAgent
from google.adk.tools import FunctionTool
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
# Stage 1: Receive order
order_receiver = LlmAgent(
name="order_receiver",
model="gemini-3-flash-preview",
instruction="""Parse the customer order and extract:
- Item SKU
- Quantity
- Requested delivery date
- Customer contact info
Output as structured data.""",
output_key="parsed_order"
)
# Stage 2: Parallel checks (inventory & pricing)
def check_inventory_tool(sku: str, quantity: int) -> dict:
    """Check whether stock is available."""
    stock = {"SKU-001": 500, "SKU-002": 45}
    available = stock.get(sku, 0)
    return {"sku": sku, "requested": quantity, "available": available, "can_fulfill": available >= quantity}

def get_pricing_tool(sku: str, quantity: int) -> dict:
    """Get current pricing."""
    unit_prices = {"SKU-001": 12.50, "SKU-002": 25.00}
    price_per_unit = unit_prices.get(sku, 0)
    total = price_per_unit * quantity
    return {"sku": sku, "unit_price": price_per_unit, "quantity": quantity, "total_price": total}
inventory_checker = LlmAgent(
name="inventory_checker",
model="gemini-3-flash-preview",
instruction="""Check inventory for the requested item.
Order: {parsed_order}
Use the check_inventory_tool to verify stock.""",
tools=[FunctionTool(check_inventory_tool)],
output_key="inventory_check"
)
pricing_agent = LlmAgent(
name="pricing_agent",
model="gemini-3-flash-preview",
instruction="""Determine pricing for the order.
Order: {parsed_order}
Use the get_pricing_tool to calculate the total.""",
tools=[FunctionTool(get_pricing_tool)],
output_key="pricing_info"
)
evaluation = ParallelAgent(name="evaluation", sub_agents=[inventory_checker, pricing_agent])
# Stage 3: Decision
approval_agent = LlmAgent(
name="approval_agent",
model="gemini-3-flash-preview",
instruction="""Decide if we can fulfill the order.
Order: {parsed_order}
Inventory: {inventory_check}
Pricing: {pricing_info}
Respond with: APPROVED or REJECTED with reasoning.""",
output_key="approval"
)
# Stage 4: Production (after approval)
production_scheduler = LlmAgent(
name="production_scheduler",
model="gemini-3-flash-preview",
instruction="""Schedule production for the approved order.
Order: {parsed_order}
Approval: {approval}
Create a production schedule with delivery date.""",
output_key="production_schedule"
)
# Stage 5: Quality check
quality_checker = LlmAgent(
name="quality_checker",
model="gemini-3-flash-preview",
instruction="""Verify production schedule meets quality standards.
Schedule: {production_schedule}
Approve if acceptable, flag issues otherwise.""",
output_key="quality_status"
)
# Stage 6: Shipping (depends on quality)
shipping_agent = LlmAgent(
name="shipping_agent",
model="gemini-3-flash-preview",
instruction="""Schedule shipping for the approved order.
Order: {parsed_order}
Production: {production_schedule}
Quality: {quality_status}
Create a shipping manifest.""",
output_key="shipping_manifest"
)
# Stage 7: Confirmation
confirmation_agent = LlmAgent(
name="confirmation_agent",
model="gemini-3-flash-preview",
instruction="""Generate a confirmation message for the customer.
Order: {parsed_order}
Pricing: {pricing_info}
Shipping: {shipping_manifest}
Write a professional confirmation email.""",
output_key="confirmation"
)
order_to_delivery = SequentialAgent(
name="order_to_delivery_pipeline",
sub_agents=[order_receiver, evaluation, approval_agent, production_scheduler, quality_checker, shipping_agent, confirmation_agent]
)
runner = Runner(agent=order_to_delivery, session_service=InMemorySessionService())
customer_request = """I'd like to order 100 units of SKU-001.
I need delivery by March 28.
Contact me at [email protected]."""
result = runner.send_message(session_id="order-2026-0847", message=customer_request)
print(result)

State is passed between stages via placeholders such as {parsed_order} and {inventory_check}. ParallelAgent runs the inventory and pricing checks concurrently; each agent handles a single decision or action, and the whole flow is defined declaratively.
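Because the two tools are plain Python functions, they can be sanity-checked on their own before being wrapped in FunctionTool; their definitions are repeated here so the snippet runs standalone.

```python
# Unit-style sanity checks for the pipeline's tools. FunctionTool only wraps
# plain functions, so testing them directly needs no ADK at all.

def check_inventory_tool(sku: str, quantity: int) -> dict:
    """Check whether stock is available."""
    stock = {"SKU-001": 500, "SKU-002": 45}
    available = stock.get(sku, 0)
    return {"sku": sku, "requested": quantity,
            "available": available, "can_fulfill": available >= quantity}

def get_pricing_tool(sku: str, quantity: int) -> dict:
    """Get current pricing."""
    unit_prices = {"SKU-001": 12.50, "SKU-002": 25.00}
    price_per_unit = unit_prices.get(sku, 0)
    return {"sku": sku, "unit_price": price_per_unit,
            "quantity": quantity, "total_price": price_per_unit * quantity}

# 100 units of SKU-001: 500 in stock, so fulfillable at 100 * 12.50 = 1250.0.
print(check_inventory_tool("SKU-001", 100)["can_fulfill"])   # True
print(check_inventory_tool("SKU-002", 100)["can_fulfill"])   # False (only 45)
print(get_pricing_tool("SKU-001", 100)["total_price"])       # 1250.0
```

Verifying deterministic tools in isolation keeps debugging of the full pipeline focused on the LLM stages, where the real nondeterminism lives.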
CustomAgent: When Pre‑defined Modes Aren't Enough
For conditional branching or error‑recovery scenarios, developers can subclass CustomAgent to implement full control logic.
from google.adk.agents import CustomAgent, LlmAgent
class SmartRouter(CustomAgent):
    """Route orders to simple or complex fulfillment paths based on LLM analysis."""

    def __init__(self):
        self.simple_path = SequentialAgent(name="simple_fulfillment", sub_agents=[...simple agents...])
        self.complex_path = SequentialAgent(name="complex_fulfillment", sub_agents=[...complex agents...])
        self.router = LlmAgent(
            name="order_router",
            instruction="""Analyze the order and decide: SIMPLE or COMPLEX.
Order: {order}
Simple orders: standard items, normal quantities, no customization.
Complex orders: custom requests, bulk, integration required."""
        )

    async def execute(self, session, context):
        router_output = await self.router.execute(session, context)
        if "SIMPLE" in router_output:
            result = await self.simple_path.execute(session, context)
        else:
            result = await self.complex_path.execute(session, context)
        return result

The SmartRouter first uses an LLM to classify order complexity, then dispatches the order to either a simple or a complex fulfillment workflow, demonstrating how custom logic can be blended with ADK's declarative agents.
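The same classify-then-dispatch pattern can be sketched in plain Python; classify, the two fulfillment functions, and ROUTES are hypothetical stand-ins for the LLM router and the two workflows.

```python
# Sketch of the SmartRouter dispatch pattern: classify first, then route to
# one of several pre-built workflows via a lookup table.

def classify(order: str) -> str:
    # Keyword heuristic standing in for the LLM router's SIMPLE/COMPLEX call.
    complex_markers = ("custom", "bulk", "integration")
    return "COMPLEX" if any(m in order.lower() for m in complex_markers) else "SIMPLE"

def simple_fulfillment(order: str) -> str:
    return f"fast-tracked: {order}"

def complex_fulfillment(order: str) -> str:
    return f"escalated to specialists: {order}"

# The routing table decouples classification from the workflows themselves.
ROUTES = {"SIMPLE": simple_fulfillment, "COMPLEX": complex_fulfillment}

def route(order: str) -> str:
    return ROUTES[classify(order)](order)

print(route("20 standard units of SKU-001"))
print(route("bulk order with custom engraving"))
```

Keeping the paths in a table means adding a third path (say, EXPEDITED) only touches the classifier prompt and one dictionary entry, mirroring how new sub-workflows plug into a CustomAgent.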
Conclusion
The example builds a digital company composed of seven specialized agents that collaborate through structured state transfer, eliminating glue code and imperative orchestration. However, the current design is static—team structures are fixed at definition time. The next step is dynamic orchestration, where agents can be created on‑demand, roles reassigned, and collaboration topologies adjusted based on load.
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
DeepHub IMBA
