Mastering AutoGen 0.4: Build Multi‑Agent Tools with Python and MCP
This article walks through the major changes in Microsoft AutoGen 0.4, explains its layered, modular architecture and event-driven multi-agent design, describes the built-in tool types, and provides step-by-step Python code for creating a Tools Agent and integrating it with an MCP server.
What’s new in AutoGen 0.4
AutoGen 0.4 is a ground-up rewrite that is not backward compatible with the 0.2 API. It focuses on multi-agent development and can be combined with frameworks such as LangGraph or LlamaIndex for flexible pipelines.
Key architectural changes
Layered, modular architecture – use the low-level autogen-core API for maximum control, or the high-level AgentChat layer for quick prototyping.
Event-driven multi-agent model – each agent is an asynchronous message-responding object managed by a runtime, enabling loosely coupled, scalable distributed systems (a minimal sketch of this pattern follows below).
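To make the event-driven model concrete, here is a minimal sketch that is not from the article: a hypothetical EchoAgent registered with a runtime, which routes each incoming message type to the matching handler:
import asyncio
from dataclasses import dataclass
from autogen_core import AgentId, MessageContext, RoutedAgent, SingleThreadedAgentRuntime, message_handler

@dataclass
class Greeting:
    content: str

class EchoAgent(RoutedAgent):
    def __init__(self):
        super().__init__("An agent that echoes greetings")

    # The runtime delivers any Greeting sent to this agent to this handler.
    @message_handler
    async def on_greeting(self, message: Greeting, ctx: MessageContext) -> Greeting:
        return Greeting(content=f"Echo: {message.content}")

async def main():
    runtime = SingleThreadedAgentRuntime()
    await EchoAgent.register(runtime, "echo", lambda: EchoAgent())
    runtime.start()
    reply = await runtime.send_message(Greeting("hello"), AgentId("echo", "default"))
    print(reply.content)  # Echo: hello
    await runtime.stop()

asyncio.run(main())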
Tools and their types
AutoGen 0.4 provides a unified tool interface: FunctionTool wraps ordinary Python functions, and several ready-made extensions build on the same abstraction (a usage sketch follows the list):
PythonCodeExecutionTool – runs Python code locally or in Docker.
GraphRAGTool – integrates Microsoft GraphRAG for retrieval.
LangChainTool – adapts LangChain tools to AutoGen.
HttpTool – performs HTTP requests.
MCPTool – wraps MCP Server tools for AutoGen.
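Because all of these share one tool interface, a tool can be exercised on its own before handing it to an agent. A minimal sketch, where fetch_weather is an illustrative stand-in rather than anything from the article:
import asyncio
from autogen_core import CancellationToken
from autogen_core.tools import FunctionTool

async def fetch_weather(city: str) -> str:
    """Illustrative stand-in for a real weather lookup."""
    return f"Sunny in {city}"

weather_tool = FunctionTool(fetch_weather, description="Look up the weather for a city.")

async def main():
    print(weather_tool.schema)  # name, description, and typed parameters derived from the signature
    result = await weather_tool.run_json({"city": "Paris"}, CancellationToken())
    print(weather_tool.return_value_as_string(result))

asyncio.run(main())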
Creating a Tools Agent with AutoGen‑Core
First, prepare three tools (two custom functions and a Python executor):
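The article does not show web_search and send_mail themselves; any async functions with type hints will do, since FunctionTool derives the tool schema from the signature. Two placeholder implementations, purely illustrative:
async def web_search(query: str) -> str:
    """Placeholder: a real implementation would call a search API."""
    return f"Results for '{query}' ..."

async def send_mail(to: str, subject: str, body: str) -> str:
    """Placeholder: a real implementation would send via smtplib or a mail API."""
    return f"Mail sent to {to}"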
from autogen_core.tools import FunctionTool
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor
from autogen_ext.tools.code_execution import PythonCodeExecutionTool

web_search_tool = FunctionTool(web_search, description="Performs a web search with the given keywords.")
send_mail_tool = FunctionTool(send_mail, description="Sends an email to the specified address.")
# Swap in DockerCommandLineCodeExecutor (autogen_ext.code_executors.docker) for sandboxed execution.
code_executor = LocalCommandLineCodeExecutor()
python_tool = PythonCodeExecutionTool(executor=code_executor)
tools = [web_search_tool, send_mail_tool, python_tool]
Define a ToolUseAgent that derives from RoutedAgent and implements a message handler. The handler forwards the user message into the model context, then calls the helper tool_agent_caller_loop, which orchestrates the LLM-to-tool request/response cycle:
from dataclasses import dataclass
from datetime import datetime

from autogen_core import AgentId, MessageContext, RoutedAgent, SingleThreadedAgentRuntime, message_handler
from autogen_core.model_context import BufferedChatCompletionContext
from autogen_core.models import AssistantMessage, SystemMessage, UserMessage
from autogen_core.tool_agent import ToolAgent, tool_agent_caller_loop
from autogen_ext.models.openai import OpenAIChatCompletionClient

@dataclass
class Message:
    content: str

class ToolUseAgent(RoutedAgent):
    def __init__(self):
        super().__init__("A simple agent")
        self._system_messages = [SystemMessage(content=f"You are a helpful AI assistant. The current time is: {datetime.now()}")]
        self._model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
        self._model_context = BufferedChatCompletionContext(buffer_size=5)
        self._tool_agent_id = AgentId("tool_executor_agent", self.id.key)

    @message_handler
    async def handle_user_message(self, message: Message, ctx: MessageContext) -> Message:
        user_message = UserMessage(content=message.content, source="user")
        await self._model_context.add_message(user_message)
        # tool_agent_caller_loop alternates between the model and the tool executor
        # until the model returns a plain-text answer.
        messages = await tool_agent_caller_loop(
            self,
            model_client=self._model_client,
            input_messages=self._system_messages + await self._model_context.get_messages(),
            tool_agent_id=self._tool_agent_id,
            tool_schema=tools,
            cancellation_token=ctx.cancellation_token,
        )
        await self._model_context.add_message(AssistantMessage(content=messages[-1].content, source=self.metadata["type"]))
        return Message(content=messages[-1].content)
Register the ToolUseAgent and the corresponding ToolAgent with a runtime, then start an interactive loop (the snippets assume an async context, e.g. a coroutine run with asyncio.run):
runtime = SingleThreadedAgentRuntime()
await ToolUseAgent.register(runtime, "my_agent", lambda: ToolUseAgent())
await ToolAgent.register(runtime, "tool_executor_agent", lambda: ToolAgent("Tool executor agent", tools))
runtime.start()
while True:
    user_input = input("\n-------------\nEnter your question (type 'q' to end the conversation): ")
    if user_input.lower() == 'q':
        break
    message = Message(user_input)
    response = await runtime.send_message(message, AgentId("my_agent", "default"))
    print("\nAI assistant:", response.content)
await runtime.stop()
Integrating MCP Server tools
Replace the custom tools with MCP‑provided ones. Import the MCP extension and create a StdioServerParams pointing to the file‑system server, then obtain the tool list via mcp_server_tools:
from pathlib import Path

from autogen_ext.tools.mcp import StdioServerParams, mcp_server_tools

async def get_mcp_tools():
    desktop = str(Path.home() / "Desktop/mcptest")
    server_params = StdioServerParams(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", desktop],
    )
    tools = await mcp_server_tools(server_params)
    return tools

mcp_tools = await get_mcp_tools()
tools = [*mcp_tools, python_tool]
Running the same agent with these tools lets the LLM manage the local file system (preferably from inside Docker for isolation). The demo output shows file-system operations performed via the MCP tools.
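Before wiring everything up, it can help to print what the filesystem server actually exposed; tool names vary with the server version. The request text in the second snippet is illustrative and reuses the runtime and Message type from the previous section:
# Inspect the tools discovered from the MCP server
for tool in mcp_tools:
    print(f"{tool.name}: {tool.description}")

# A typical request routed through the same ToolUseAgent
response = await runtime.send_message(
    Message("List the files in the mcptest folder and create notes.txt there."),
    AgentId("my_agent", "default"),
)
print(response.content)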
Conclusion
AutoGen 0.4’s modular, event‑driven design and its rich set of Tools—including the MCP extension—make it straightforward to build agents with diverse capabilities. Future articles will explore how multiple agents can collaborate through message passing.