Launch a Multi‑Agent AI System in 20 Lines with OxyGent

This guide shows how to quickly build, configure, and deploy modular AI agents using the OxyGent framework—covering environment setup, minimal code initialization, tool integration, multi‑agent orchestration, and advanced deployment techniques—all illustrated with concise examples.

JD Tech

OxyGent lets developers assemble AI agent systems like building blocks, offering high extensibility and fully traceable decision making, which helped it achieve top scores on the GAIA (General AI Assistant) benchmark.

Quick Start with 20 Lines of Code

This guide walks through a step‑by‑step example that launches an agent with only about 20 lines of code, covering environment installation, model registration, MCP tool integration, agent registration, visual debugging, multi‑agent collaboration, and distributed deployment.

Environment Installation

Python 3.10+ (conda example):

conda create -n oxy_env python==3.10
conda activate oxy_env

OxyGent package:

pip install oxygent

Node.js (required for MCP tools): download it from https://nodejs.org/zh-cn and install.

.env Configuration

DEFAULT_LLM_API_KEY = "<large‑model‑key>"
DEFAULT_LLM_BASE_URL = "<large‑model‑url>"
DEFAULT_LLM_MODEL_NAME = "<large‑model‑name>"
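In practice these values are usually loaded with a library such as python-dotenv, but the format is simple enough to parse by hand. The sketch below is illustrative, not OxyGent's own loader; the `load_dotenv_text` helper and the placeholder values are assumptions for the example.

```python
"""Tiny .env parser sketch (a stand-in for python-dotenv), showing how the
three DEFAULT_LLM_* settings end up in os.environ for the framework to read."""
import os


def load_dotenv_text(text: str) -> dict:
    """Parse KEY = "value" lines into a dict and export them to os.environ."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, raw = line.partition("=")
        values[key.strip()] = raw.strip().strip('"')
    os.environ.update(values)
    return values


# Example usage with placeholder values:
cfg = load_dotenv_text(
    'DEFAULT_LLM_API_KEY = "sk-test"\nDEFAULT_LLM_MODEL_NAME = "gpt-x"'
)
```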

Start the Agent

After setting up the environment and variables, run the start script to launch the agent.
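A minimal launch script, sketched after the registration pattern in OxyGent's quick‑start documentation, looks roughly like this. The specific class names (`MAS`, `oxy.HttpLLM`, `oxy.ReActAgent`) and the `start_web_service` call are assumptions based on that pattern; check them against the version of OxyGent you have installed.

```python
"""Minimal OxyGent launch sketch (~20 lines), assuming the MAS/oxy
registration pattern from OxyGent's quick-start; adjust names as needed."""
import asyncio
import os


async def main():
    # Imported lazily so this file can be read without OxyGent installed.
    from oxygent import MAS, oxy

    oxy_space = [
        oxy.HttpLLM(  # registers the model configured in .env
            name="default_llm",
            api_key=os.getenv("DEFAULT_LLM_API_KEY"),
            base_url=os.getenv("DEFAULT_LLM_BASE_URL"),
            model_name=os.getenv("DEFAULT_LLM_MODEL_NAME"),
        ),
        oxy.ReActAgent(name="master_agent", llm_model="default_llm"),
    ]
    async with MAS(oxy_space=oxy_space) as mas:
        await mas.start_web_service(first_query="Hello, OxyGent!")


if __name__ == "__main__":
    asyncio.run(main())
```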

RAG Integration

The framework supports Retrieval‑Augmented Generation, enabling agents to fetch external knowledge during inference.
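The core RAG loop is independent of any framework: retrieve the most relevant snippet, then prepend it to the prompt. The toy sketch below uses keyword overlap as the retriever; it illustrates the pattern only and is not OxyGent's actual RAG plumbing (the corpus, `retrieve`, and `build_prompt` are all invented for the example).

```python
"""Toy retrieval-augmented generation loop: pick the corpus entry with the
highest keyword overlap, then splice it into the prompt as context."""
corpus = [
    "OxyGent assembles agents from reusable building blocks.",
    "MCP tools can be registered locally or over SSE.",
]


def retrieve(query: str) -> str:
    """Return the corpus entry sharing the most words with the query."""
    q = set(query.lower().split())
    return max(corpus, key=lambda doc: len(q & set(doc.lower().split())))


def build_prompt(query: str) -> str:
    """Prepend retrieved context so the model answers from external knowledge."""
    return f"Context: {retrieve(query)}\nQuestion: {query}"


prompt = build_prompt("How are MCP tools registered?")
```

A production retriever would swap the keyword overlap for embedding similarity over a vector store, but the prompt-assembly step stays the same.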

Tool Integration

Agents can invoke tools via three registration methods:

Local MCP

SSE MCP

FunctionHub

All three methods produce the same runtime effect.
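The FunctionHub route is the simplest to picture: plain Python functions are registered as callable tools. The sketch below shows the general decorator-registry idea; the class and method names here are illustrative stand-ins, not OxyGent's exact API.

```python
"""Decorator-based tool registry sketch, illustrating the FunctionHub idea of
exposing ordinary Python functions as agent tools (names are illustrative)."""
from typing import Callable


class FunctionHub:
    def __init__(self):
        self.tools: dict[str, Callable] = {}

    def tool(self, func: Callable) -> Callable:
        """Register a function under its own name and return it unchanged."""
        self.tools[func.__name__] = func
        return func

    def call(self, name: str, *args, **kwargs):
        """Invoke a registered tool by name, as an agent runtime would."""
        return self.tools[name](*args, **kwargs)


hub = FunctionHub()


@hub.tool
def add(a: int, b: int) -> int:
    return a + b
```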

Building Multi‑Agent Systems

Using a block‑based approach, developers can stack multiple agents, build hierarchical agent teams, combine agents with workflows, and apply a reflection mechanism for self‑improvement.

Fast Deployment

Data persistence for downstream SFT or RL training.

Concurrency limits per node.

Multi‑environment configuration.

Distributed deployment support.
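Per-node concurrency limits, the second item above, are commonly implemented with a semaphore around each node's work. The sketch below is framework-agnostic (the `Node` class is invented for illustration); it caps concurrent tasks at a node and records the peak to show the cap holding.

```python
"""Per-node concurrency limiting sketch using asyncio.Semaphore — one way to
implement node-level limits; the Node class here is illustrative only."""
import asyncio


class Node:
    def __init__(self, name: str, max_concurrency: int):
        self.name = name
        self._sem = asyncio.Semaphore(max_concurrency)
        self._active = 0
        self.peak = 0  # highest concurrency actually observed

    async def run(self, coro_fn):
        async with self._sem:  # at most max_concurrency tasks inside
            self._active += 1
            self.peak = max(self.peak, self._active)
            try:
                return await coro_fn()
            finally:
                self._active -= 1


async def demo():
    node = Node("llm", max_concurrency=2)

    async def work():
        await asyncio.sleep(0.01)  # simulate a model or tool call
        return "ok"

    results = await asyncio.gather(*(node.run(work) for _ in range(6)))
    return node.peak, results


peak, results = asyncio.run(demo())
```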

Advanced Usage

Multimodal capabilities.

Weighted memory filtering.

Dynamic tool discovery.

Custom LLM output parsers.

Custom SSE interfaces.

Result post‑processing or formatting.

Simultaneous tool calls.

Task restart from intermediate nodes.

Plan‑and‑Solve paradigm.

These features enable sophisticated agent applications beyond the basic quick‑start example.
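Of the paradigms listed, Plan‑and‑Solve is the easiest to sketch: a planner decomposes the task into steps, then each step is solved in order with earlier results as context. The stubbed `plan` and `solve` functions below stand in for LLM calls and are invented for illustration.

```python
"""Minimal Plan-and-Solve sketch: decompose the task into steps, then solve
the steps sequentially — model calls are stubbed for illustration."""


def plan(task: str) -> list[str]:
    """Planner stub: break the task into ordered sub-steps."""
    return [f"research {task}", f"outline {task}", f"write {task}"]


def solve(step: str, context: list[str]) -> str:
    """Solver stub: execute one step, given the results so far."""
    return f"done: {step}"


def plan_and_solve(task: str) -> list[str]:
    results = []
    for step in plan(task):
        results.append(solve(step, results))
    return results


steps = plan_and_solve("a summary")
```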

Tags: AI agents, Tool Integration, deployment, Multi-agent, OxyGent
Written by JD Tech

Official JD technology sharing platform. All the cutting‑edge JD tech, innovative insights, and open‑source solutions you’re looking for, all in one place.