Master CrewAI: Build Multi‑Agent Systems Quickly with Flows and a Full Demo

This article introduces CrewAI, a high‑level Python framework for constructing multi‑agent systems. It explains the core concepts of Crew, Agent, Tool, Task, and Process; walks through a complete demo with code; evaluates the framework's strengths and limitations; and showcases the new Flows feature for more flexible workflow orchestration.


What Is CrewAI?

CrewAI is a high‑level Python framework built on top of LangChain, designed specifically for creating multi‑agent systems that can act autonomously and collaborate seamlessly to accomplish complex tasks.

Core Concepts

Crew – a small team composed of multiple agents.

Agent – an individual autonomous entity with a role, goal, background, tools, and an LLM. Agents can make decisions, invoke tools, and communicate with other agents.

Tool – external functions, APIs, or knowledge sources that extend an agent’s capabilities.

Task – a piece of work (e.g., generate a social‑media post, fetch the latest news) that must be completed by an agent.

Process – the strategy for assigning and executing tasks. CrewAI's built‑in process model currently supports two strategies: sequential and hierarchical.
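To make the two strategies concrete, here is a plain‑Python sketch of how task dispatch differs between them. This is purely illustrative, not CrewAI's implementation: `run_sequential`, `run_hierarchical`, and the toy agents are made‑up names for this example.

```python
# Plain-Python sketch of the two process strategies (illustrative only,
# not CrewAI's implementation; all names here are invented for the example).

def run_sequential(tasks, agents):
    """Each task goes to the next agent in order; each output feeds the next task."""
    context = None
    for task, agent in zip(tasks, agents):
        context = agent(task, context)
    return context

def run_hierarchical(tasks, worker_agents, manager):
    """A manager decides which worker handles each task."""
    results = []
    for task in tasks:
        worker = manager(task, worker_agents)  # manager picks a worker
        results.append(worker(task, None))
    return results

# Toy agents: plain functions standing in for LLM-backed agents.
searcher = lambda task, ctx: f"searched:{task}"
emailer = lambda task, ctx: f"emailed:{task}"
pick = lambda task, workers: workers["search" if "search" in task else "email"]

print(run_sequential(["find news", "send summary"], [searcher, emailer]))
print(run_hierarchical(["search weather", "email report"],
                       {"search": searcher, "email": emailer}, pick))
```

In the sequential sketch the pipeline is fixed in advance; in the hierarchical one a manager routes each task at runtime, which is exactly the role the supervisor agent plays in the demo below.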

Demo: Building a Simple Multi‑Agent System

The following steps show how to create tools, define agents, assemble a crew, and run a task.

from crewai import Agent, Crew, Process, Task
from crewai.tools import tool
from langchain_community.tools.tavily_search import TavilySearchResults

# 1️⃣ Create tools
@tool
def email_tool(recipient: str, subject: str, body: str) -> str:
    """Send an email with the given subject and body."""
    print(f"Sending email to {recipient} with subject '{subject}'...")
    return f"Email sent to {recipient} with subject '{subject}' and body '{body}'."

@tool
def web_search_tool(query: str) -> str:
    """Search the web for the given query."""
    print(f"Searching for: {query}...")
    search = TavilySearchResults(max_results=3)
    results = search.invoke(query)
    return "\n".join([r["content"] for r in results])

Define three agents – a web‑searcher, an email sender, and a manager (supervisor) agent:

# web search agent
web_searcher = Agent(
    role="Web Searcher",
    goal="Given an input task, decide on search keywords and perform the search",
    backstory="You excel at navigating the web and finding the most relevant, up-to-date information.",
    tools=[web_search_tool],
    verbose=True,
)

# email agent
emailer = Agent(
    role="Email Sender",
    goal="Given an input task, send a well-crafted email to the specified contact.",
    backstory="You are a skilled communicator who writes clear, effective emails.",
    tools=[email_tool],
    verbose=True,
)

# project manager (supervisor) agent
manager = Agent(
    role="Project Manager",
    goal="Manage the team efficiently and ensure tasks are completed to a high standard",
    backstory="You are an experienced project manager, skilled at coordinating the efforts of team members.",
    llm="gpt-4o-mini",
    allow_delegation=True,
)

Assemble the crew, define a task, and specify a hierarchical process:

crew = Crew(
    agents=[web_searcher, emailer],
    tasks=[Task(
        description="Search for the latest weather in Nanjing and send an email to [email protected]",
        expected_output="A confirmation message that the email was sent",
    )],
    manager_agent=manager,
    process=Process.hierarchical,
    verbose=True,
)
crew.kickoff()

The execution flow shows the manager delegating the search task to the web‑searcher, receiving the result, then delegating the email task to the emailer, and finally summarising the outcome.

Limitations of the Original Process Model

Only the sequential and hierarchical strategies are available – no loops, conditional branches, or other complex control flow.

Heavy reliance on LLM‑driven decision making can make execution unpredictable.

New Feature: CrewAI Flows

CrewAI Flows introduce an event‑driven architecture that enables the construction of more sophisticated workflows, similar to LangGraph or LlamaIndex Workflows. Key capabilities include:

Simplified workflow creation – combine multiple crews and tasks.

Flexible state management – share state between tasks easily.

Event‑driven routing – use @listen, @router, and logical operators to implement conditional logic, loops, and branching.
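The routing idea behind `@listen` and `@router` can be sketched in plain Python with a registry of listeners: a router returns an event name, and whichever method registered for that event fires next. This is a conceptual toy to show the mechanism, not CrewAI's actual dispatch code, and all names in it are invented for the example.

```python
# Conceptual sketch of event-driven routing: methods register for named
# events, and a router's return value selects which listener fires next.
# This mimics the @listen/@router idea; it is not CrewAI's implementation.

listeners = {}

def listen(event):
    """Register the decorated function as a handler for `event`."""
    def register(fn):
        listeners.setdefault(event, []).append(fn)
        return fn
    return register

def emit(event, state):
    """Fire every handler registered for `event`."""
    for fn in listeners.get(event, []):
        fn(state)

@listen("search_agent")
def do_search(state):
    state["log"].append("search")

@listen("email_agent")
def do_email(state):
    state["log"].append("email")

def router(state):
    # Conditional branching: return the next event name based on shared state.
    return "search_agent" if state["next"] == "search" else "email_agent"

state = {"next": "search", "log": []}
emit(router(state), state)
print(state["log"])  # ['search']
```

The `state` dict stands in for the shared, typed state object that Flows pass between steps; the real example below uses a Pydantic model for the same purpose.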

Example of a Supervisor flow using Flows:

import random
from crewai.flow.flow import Flow, listen, router, start, or_
from pydantic import BaseModel

class ExampleState(BaseModel):
    task: str = ""
    messages: list = []
    next_agent: str = ""

class SupervisorFlow(Flow[ExampleState]):
    @start(or_("search_method", "email_method"))
    def supervisor(self):
        print(f"\nTask: {self.state.task}")
        print("\nDeciding which agent to use next...")
        choice = random.choice(["search", "email", "finish"])  # In practice, use LLM reasoning
        self.state.next_agent = choice

    @router(supervisor)
    def route(self):
        if self.state.next_agent == "search":
            return "search_agent"
        elif self.state.next_agent == "email":
            return "email_agent"
        else:
            return "finish"

    @listen("search_agent")
    def search_method(self):
        print("\nCalling the Search Agent...")
        # actual search logic omitted

    @listen("email_agent")
    def email_method(self):
        print("\nCalling the Email Agent...")
        # actual email logic omitted

    @listen("finish")
    def finish_method(self):
        print("\nTask complete.")

flow = SupervisorFlow()
flow.kickoff(inputs={"task": "Search for the top trending news and send an email to [email protected]"})

The flow visualises the event‑driven sequence and demonstrates how CrewAI Flows can replace the limited hierarchical process with a more expressive, condition‑aware pipeline.
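The loop that the supervisor pattern needs – keep routing work until the supervisor decides to stop – can be reduced to a small plain‑Python sketch. This is exactly the control flow the original sequential/hierarchical processes could not express; the sketch is illustrative only, and `supervisor_loop` and `decide` are invented names standing in for Flow steps and LLM reasoning.

```python
# Toy supervisor loop in plain Python: keep asking a decision function
# (a stand-in for LLM reasoning) what to do next, dispatching work until
# it returns "finish". Illustrative only; not a CrewAI API.

def supervisor_loop(decide, max_steps=10):
    """Run decide -> dispatch repeatedly until 'finish' or max_steps is hit."""
    log = []
    for _ in range(max_steps):
        step = decide(log)
        if step == "finish":
            log.append("done")
            return log
        log.append(f"ran:{step}")  # dispatch to the chosen agent
    return log

# Deterministic stand-in for the LLM: search first, then email, then finish.
def decide(log):
    if not log:
        return "search"
    if log[-1] == "ran:search":
        return "email"
    return "finish"

print(supervisor_loop(decide))  # ['ran:search', 'ran:email', 'done']
```

The `max_steps` cap is worth copying into real flows: because the routing decision comes from an LLM, an explicit iteration limit prevents a misbehaving supervisor from looping forever.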

Pros and Cons

Advantages

Designed natively for multi‑agent systems.

Intuitive API – agents are primarily defined via prompt engineering.

Low barrier to entry; you can spin up many agents in minutes.

Rich built‑in tools and strong compatibility with LangChain.

Good integration with LlamaIndex, allowing reuse of its extensive toolset.

Disadvantages

Complex workflow orchestration is still limited (improved but not yet on par with LangGraph).

Agent collaboration can be nondeterministic; some demos require multiple iterations.

Community support and resources are smaller than those for LangGraph or LlamaIndex.

When to Use CrewAI

CrewAI is a solid choice if you have some LLM development experience, need to prototype a multi‑agent system quickly, and your workflow does not require highly intricate branching or looping logic.

Because CrewAI integrates well with LangChain and LlamaIndex, you can combine it with those ecosystems when more advanced features are needed.

Tags: Python, LLM, Multi-agent, AI Framework, CrewAI, Flows
Written by

AI Large Model Application Practice

Focused on deep research and development of large-model applications. Authors of "RAG Application Development and Optimization Based on Large Models" and "MCP Principles Unveiled and Development Guide". Primarily B2B, with B2C as a supplement.
