Master LlamaIndex Workflows: Build Multi‑Agent RAG Applications Step‑by‑Step
This article introduces LlamaIndex Workflows, explains its event‑driven design, walks through a multi‑agent demo that combines weather search and email sending, provides complete Python code for defining events, steps, and the orchestrator, and compares its strengths and limitations against similar frameworks.
Introduction
LlamaIndex Workflows is a newly released feature (late 2024) that offers an event‑driven framework for building complex AI workflows, similar to LangGraph but more tightly integrated with LlamaIndex’s data‑centric RAG capabilities.
Quick Start
LlamaIndex is positioned as a powerful alternative to LangChain for constructing data‑intensive RAG applications. The Workflows extension enables developers to orchestrate multi‑step, multi‑agent processes without manually defining graph edges.
Core Concepts
Workflow: Represents an executable graph of steps, analogous to a LangGraph graph.
Step: A node that receives an Event as input and returns an Event as output.
Event: The data carrier exchanged between steps; custom events can be defined alongside the built‑in StartEvent and StopEvent.
Context: Shared state (similar to LangGraph’s State) that persists across steps.
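To make the event‑driven model concrete, here is a minimal plain‑Python analogy (not the LlamaIndex API): each step is a function keyed by the event type it accepts, and a dispatcher keeps routing emitted events until a stop event appears.

```python
from dataclasses import dataclass

# Event types; StartEvent kicks things off, StopEvent ends the run.
@dataclass
class StartEvent:
    query: str

@dataclass
class WorkEvent:
    text: str

@dataclass
class StopEvent:
    result: str

def prepare(ev: StartEvent) -> WorkEvent:
    return WorkEvent(text=ev.query.upper())

def finish(ev: WorkEvent) -> StopEvent:
    return StopEvent(result=f"done: {ev.text}")

# The "workflow": route each event to the step registered for its type.
STEPS = {StartEvent: prepare, WorkEvent: finish}

def run(start: StartEvent) -> str:
    ev = start
    while not isinstance(ev, StopEvent):
        ev = STEPS[type(ev)](ev)
    return ev.result

print(run(StartEvent(query="hello")))  # done: HELLO
```

Notice there is no explicit edge list: which step runs next is determined entirely by the type of the event the previous step returned, which is exactly what distinguishes Workflows from graph‑defined frameworks like LangGraph.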
Demo Implementation – Multi‑Agent Workflow
The demo builds a two‑agent system using ReActAgent for weather search and email sending, coordinated by a supervisor agent implemented with Workflows.
1. Define Tools
from llama_index.core.tools import FunctionTool
from llama_index.core.agent import ReActAgent

# `llm` is assumed to be a previously configured LLM instance.

def search_weather(query: str) -> str:
    """Simulate a weather search."""
    return "Tomorrow: sunny turning cloudy, high of 30°C, low of 23°C."

tool_search = FunctionTool.from_defaults(fn=search_weather)
agent_search = ReActAgent.from_tools([tool_search], llm=llm, verbose=True)

def send_email(subject: str, recipient: str, message: str) -> None:
    """Simulate sending an email."""
    print(f"Email sent to {recipient}, subject: {subject}, body: {message}")

tool_send_mail = FunctionTool.from_defaults(fn=send_email)
agent_send_mail = ReActAgent.from_tools([tool_send_mail], llm=llm, verbose=True)
2. Prompt Templates
from pydantic import BaseModel

class TransferToAgent(BaseModel):
    """Parameters for delegating a task to a specific agent."""
    agent_name: str
    agent_task: str

class TaskRecord(BaseModel):
    """Record of each step's execution."""
    agent: str
    input: str
    result: str
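The orchestrator asks the LLM to "call" TransferToAgent as a tool, so the delegation decision comes back as structured keyword arguments rather than free text. A plain‑stdlib sketch of that payload (a dataclass stands in for the Pydantic model, and the kwargs dict stands in for a real LLM tool call):

```python
from dataclasses import dataclass

# Stand-in for the Pydantic TransferToAgent model.
@dataclass
class TransferToAgentStandIn:
    agent_name: str
    agent_task: str

# Hypothetical kwargs as extracted from an LLM tool call.
tool_kwargs = {
    "agent_name": "agent_search",
    "agent_task": "Look up tomorrow's weather in Beijing",
}
call = TransferToAgentStandIn(**tool_kwargs)
print(f"delegate to {call.agent_name}: {call.agent_task}")
```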
agent_context_str = """
agent_search: only for searching the web for weather information.
agent_send_mail: only for sending emails.
"""

DEFAULT_ORCHESTRATOR_PROMPT = """
You are a task-orchestration expert.
Your job is to:
1. Based on the user's input and the task execution history, decide whether the next step needs an AI assistant.
2. Based on the task history, decide the next AI assistant's task content and input.
3. If the user's task is already complete, end the task.
4. If no AI assistant is needed, handle the task yourself and produce the output.
Here is the task description:
{task_description}
Here are the AI assistants you can choose from:
{agent_context_str}
Here is the task history:
{task_history_str}
Please decide the next action.
"""
3. Workflow Definition
from typing import Any

from llama_index.core.llms import ChatMessage
from llama_index.core.workflow import (
    Context,
    Event,
    StartEvent,
    StopEvent,
    Workflow,
    step,
)

# Custom events that route control between steps.
class PrepEvent(Event):
    pass

class SearchEvent(Event):
    input: str

class SendMailEvent(Event):
    input: str

class MultiAgentWorkflow(Workflow):
    def __init__(self, orchestrator_prompt: str | None = None, **kwargs: Any):
        super().__init__(**kwargs)
        self.orchestrator_prompt = orchestrator_prompt or DEFAULT_ORCHESTRATOR_PROMPT
        self.agent_search = agent_search
        self.agent_send_mail = agent_send_mail

    @step
    async def new_user_msg(self, ctx: Context, ev: StartEvent) -> PrepEvent:
        await ctx.set("task_description", ev.input)
        return PrepEvent()
    @step
    async def orchestrator(self, ctx: Context, ev: PrepEvent) -> StopEvent | SearchEvent | SendMailEvent:
        task_description = await ctx.get("task_description")
        task_history = await ctx.get("task_history", default=[])
        task_history_str = "\n".join(
            f"AI assistant: {r.agent}, input: {r.input}, output: {r.result}" for r in task_history
        )
        system_prompt = self.orchestrator_prompt.format(
            task_description=task_description,
            agent_context_str=agent_context_str,
            task_history_str=task_history_str,
        )
        llm_input = [ChatMessage(role="system", content=system_prompt)]
        # get_function_tool turns the Pydantic model into a tool spec
        # (its import path varies across llama_index versions).
        tools = [get_function_tool(TransferToAgent)]
        response = await llm.achat_with_tools(tools, chat_history=llm_input)
        tool_calls = llm.get_tool_calls_from_response(response, error_on_no_tool_call=False)
        if not tool_calls:
            # No delegation requested: the orchestrator answered directly.
            print(response.message.content)
            return StopEvent()
        tool_call = tool_calls[0]
        agent_name = tool_call.tool_kwargs["agent_name"]
        agent_task = tool_call.tool_kwargs["agent_task"]
        if agent_name == "agent_search":
            return SearchEvent(input=agent_task)
        elif agent_name == "agent_send_mail":
            return SendMailEvent(input=agent_task)
        else:
            return StopEvent()
    @step
    async def call_agent_search(self, ctx: Context, ev: SearchEvent) -> PrepEvent:
        response = self.agent_search.chat(ev.input)
        task_history = await ctx.get("task_history", default=[])
        task_history.append(TaskRecord(agent="agent_search", input=ev.input, result=response.response))
        await ctx.set("task_history", task_history)
        return PrepEvent()

    @step
    async def call_agent_send_mail(self, ctx: Context, ev: SendMailEvent) -> PrepEvent:
        response = self.agent_send_mail.chat(ev.input)
        task_history = await ctx.get("task_history", default=[])
        task_history.append(TaskRecord(agent="agent_send_mail", input=ev.input, result=response.response))
        await ctx.set("task_history", task_history)
        return PrepEvent()
4. Running the Workflow
workflow = MultiAgentWorkflow(timeout=None)

async def main():
    res = await workflow.run(input="Check tomorrow's weather in Beijing and send it to [email protected]")
    print(res)

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
The execution prints colored logs showing the orchestrator’s decisions, the search agent’s weather result, and the email agent’s confirmation. Changing the input to a poetry request demonstrates the orchestrator’s ability to generate free‑form text without invoking external tools.
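The overall control flow — the orchestrator decides, an agent runs, the result is appended to the history, and control returns to the orchestrator until it stops — can be simulated without any LLM. In this sketch a toy decision function stands in for the real orchestrator, and plain lambdas stand in for the two ReActAgents:

```python
def toy_orchestrator(history):
    """Stand-in for the LLM orchestrator: search first, then mail, then stop."""
    done = {h["agent"] for h in history}
    if "agent_search" not in done:
        return ("agent_search", "weather in Beijing tomorrow")
    if "agent_send_mail" not in done:
        return ("agent_send_mail", f"email the forecast: {history[-1]['result']}")
    return None  # equivalent to returning a StopEvent

# Stand-ins for the two agents.
AGENTS = {
    "agent_search": lambda task: "sunny, high 30°C",
    "agent_send_mail": lambda task: "email sent",
}

history = []
while (decision := toy_orchestrator(history)) is not None:
    name, task = decision
    result = AGENTS[name](task)
    history.append({"agent": name, "input": task, "result": result})

print([h["agent"] for h in history])  # ['agent_search', 'agent_send_mail']
```

The loop structure mirrors the workflow exactly: PrepEvent → orchestrator → SearchEvent/SendMailEvent → agent step → PrepEvent again, terminating when no further delegation is needed.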
Feature Summary and Comparison
Compared with other multi‑agent orchestration frameworks (Swarm, LangGraph, CrewAI, AutoGen), LlamaIndex Workflows can be characterized as follows:
General‑purpose LLM development foundation, not limited to multi‑agent scenarios.
Provides both simple agents and complex workflow agents as building blocks.
Rich component ecosystem and strong third‑party compatibility.
Highly customizable to fit virtually any application.
Steeper learning curve than CrewAI/AutoGen.
Offers commercial services such as LlamaCloud, LlamaParse, and LlamaDeploy (comparable to LangChain’s LangSmith).
Key differences:
LlamaIndex is generally easier to pick up.
Its core framework is more RAG‑friendly.
Workflows use an event‑driven model, whereas LangGraph relies on explicit graph definitions.
Overall, LlamaIndex Workflows is recommended for developers who need a flexible, extensible multi‑agent system with strong RAG support while avoiding the higher complexity of LangChain.
AI Large Model Application Practice
Focused on deep research and development of large-model applications. Authors of "RAG Application Development and Optimization Based on Large Models" and "MCP Principles Unveiled and Development Guide". Primarily B2B, with B2C as a supplement.