Build and Test a Multi‑Agent AI System with MetaGPT
This guide walks through the MetaGPT framework—explaining its multi‑agent architecture, core concepts, predefined roles, team setup, environment preparation, installation, configuration, and troubleshooting steps—so you can quickly build, run, and validate a collaborative AI software‑company simulation.
Project Overview
MetaGPT is an open‑source multi‑agent framework that models large language models (LLMs) as a software company. It assigns specialized roles—Product Manager, Architect, Project Manager, Engineer, QA Engineer, and Searcher—to automate complex software development tasks such as user‑story generation, requirement analysis, API design, and code production.
Core Concepts
An Agent is defined as LLM + Observation + Thought + Action + Memory. Observation gathers signals from the environment, Thought processes those signals, Action executes commands (e.g., code generation, tool use), and Memory stores past experiences. Agents operate within a shared environment, follow a standard operating procedure (SOP), communicate with one another, and exchange resources via an economy model.
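The Observation → Thought → Action → Memory cycle can be sketched in plain Python. This is an illustrative stub, not MetaGPT's actual API: the class and method names are invented here, and the Thought step, which would be an LLM call in MetaGPT, is a placeholder.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the agent loop described above: observe the
# environment, think about what to do, act, and remember the result.
# All names are hypothetical, not MetaGPT's real classes.

@dataclass
class SimpleAgent:
    memory: list = field(default_factory=list)

    def observe(self, environment: dict) -> str:
        # Observation: gather a signal from the shared environment.
        return environment.get("latest_message", "")

    def think(self, observation: str) -> str:
        # Thought: in MetaGPT this would be an LLM call; here it is a stub.
        return f"plan for: {observation}"

    def act(self, thought: str) -> str:
        # Action: execute the plan (e.g., generate code, call a tool).
        return f"executed {thought}"

    def run(self, environment: dict) -> str:
        obs = self.observe(environment)
        result = self.act(self.think(obs))
        self.memory.append(result)  # Memory: store the experience.
        return result

agent = SimpleAgent()
print(agent.run({"latest_message": "write a PRD"}))
# → executed plan for: write a PRD
```

MetaGPT wires these same four steps together for every role; the roles below differ mainly in what their Thought and Action steps produce.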
Predefined Roles
Role (base class) : common attributes shared by all agents.
Architect : designs system architecture; key actions include WriteDesign.
ProjectManager : decomposes PRD/technical design into tasks; key action WriteTasks.
ProductManager : creates product requirements; actions include PrepareDocuments and WritePRD.
Engineer : writes and tests code; actions include WriteCode, WriteTest, FixBug.
QaEngineer : ensures code quality; actions include WriteTest, RunCode, DebugError.
Searcher : provides search services; action SearchAndSummarize.
Team Construction
To build a functional team, define each role's expected actions, enforce SOPs, and instantiate a team with the following parameters:
roles : list of role instances.
environment : shared context for agents.
idea : initial user task or prompt.
investment : optional simulated resources.
n_round : number of iterations.
add_human : optional human participant.
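Putting those parameters together, a minimal startup script in the style of MetaGPT's own examples looks roughly like this (assumes metagpt 0.8.x is installed and a valid LLM configuration is in place; the idea string and investment amount are arbitrary examples):

```python
import asyncio

from metagpt.roles import Architect, Engineer, ProductManager, ProjectManager
from metagpt.team import Team


async def startup(idea: str):
    company = Team()
    # Hire the predefined roles that will collaborate on the task.
    company.hire([ProductManager(), Architect(), ProjectManager(), Engineer()])
    company.invest(investment=3.0)  # simulated budget (economy model)
    company.run_project(idea)       # seed the initial user task
    await company.run(n_round=5)    # number of collaboration rounds


if __name__ == "__main__":
    asyncio.run(startup(idea="write a CLI snake game"))
```

Each round, messages flow between roles through the shared environment: the ProductManager's PRD feeds the Architect's design, which feeds the ProjectManager's task breakdown, which the Engineer implements.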
Prerequisite Setup
Install Anaconda to manage Python virtual environments and PyCharm as the IDE.
LLM Configuration Options
Official GPT APIs (OpenAI, Azure, etc.).
Chinese LLMs (Tongyi Qianwen, Wenxin Yiyan, Baidu Qianfan, iFlytek Spark).
Local open‑source models via Ollama.
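For the local Ollama option, the configuration file takes the same shape as the OpenAI example later in this guide; this fragment follows the pattern shown in MetaGPT's documentation (the model name assumes you have already pulled it with `ollama pull`, and the port is Ollama's default):

```yaml
llm:
  api_type: "ollama"
  model: "llama2"
  base_url: "http://127.0.0.1:11434/api"
```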
Project Initialization
Clone the source code: https://github.com/NanGePlus/MetaGPTTest (or the Gitee mirror).
Create a PyCharm project named MetaGPTTest and set up a virtual Python environment.
Copy the downloaded repository files into the project directory.
Install required packages:
pip install metagpt==0.8.1 asyncio==3.4.3
LLM Configuration File
Generate a configuration file with metagpt --init-config (creates ~/.metagpt/config2.yaml) or create config/config2.yaml manually. Example for GPT‑4o‑mini:
llm:
api_type: "openai"
model: "gpt-4o-mini"
base_url: "https://yunwu.ai/v1"
api_key: "YOUR_API_KEY"
The system reads configuration in the order ~/.metagpt/config2.yaml > config/config2.yaml.
Running the Workflow
Execute the scripts located in the nangeAGICode directory. If the error
AsyncClient.__init__() got an unexpected keyword argument 'proxies'
appears, upgrade httpx: pip install --upgrade httpx==0.27.2. After updating dependencies, re‑run the scripts to verify that agents produce the expected outputs.
Version and Installation
The current stable release is v0.8.2. To install the latest development version directly from source:
pip install git+https://github.com/geekan/MetaGPT