Configure OpenClaw Multi‑Agent: GLM‑5, Kimi K2.5, DeepSeek & GLM‑Flash Team
This step‑by‑step tutorial shows how to integrate Chinese domestic models (GLM‑5, GLM‑4.7, GLM‑Flash, Kimi K2.5, DeepSeek, Qwen3‑Coder‑Next, BGE‑M3) into OpenClaw, define model routing, create dedicated controller, writer, and coder agents, and run a complete multi‑agent workflow.
Overall Approach: Model → Role → Routing
The recommended configuration order is to first add model providers in openclaw.json, then define role‑specific routing, and finally create Agent definitions in workspace/AGENTS.md.
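Concretely, the three layers map onto the top‑level blocks of openclaw.json that the rest of this tutorial assembles (a condensed view of the snippets that follow):
{
  // 1) Model: API keys and provider registration (section 1)
  env: { /* API keys */ },
  models: {
    mode: "merge",
    providers: { /* zai, deepseek, moonshot, local (sections 1 and 5) */ }
  },
  // 2) Role: per-agent routing with fallback chains (section 2)
  agents: {
    defaults: { /* controller */ },
    writer: { /* long-form writing */ },
    coder: { /* programming */ }
  }
  // 3) Responsibilities are then documented in workspace/AGENTS.md (section 3)
}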
1. Adding Model Providers
Environment variables for API keys are declared in the env section of openclaw.json. Example:
{
env: {
// ZhiPu GLM (GLM‑5 / GLM‑4.7 / Flash)
"ZAI_API_KEY": "sk-xxx",
// DeepSeek
"DEEPSEEK_API_KEY": "sk-xxx",
// Kimi (Moonshot)
"MOONSHOT_API_KEY": "sk-xxx",
// Local model via Ollama / LM Studio
"OLLAMA_API_KEY": "ollama-local"
},
// other config …
}
Providers are merged under the models block. The JSON5‑style snippet below registers three cloud providers and their model IDs:
{
env: { /* see above */ },
models: {
mode: "merge",
providers: {
// 1) ZhiPu GLM: controller + fallback
zai: {
apiKey: "${ZAI_API_KEY}",
api: "zai-api",
models: [
        // model IDs are referenced elsewhere as <provider>/<id>, e.g. "zai/glm-5"
        { id: "glm-5", contextWindow: 200000, maxTokens: 8192 },
        { id: "glm-4.7", contextWindow: 128000, maxTokens: 8192 },
        { id: "glm-4.7-flash", contextWindow: 128000, maxTokens: 8192 }
]
},
// 2) DeepSeek: programming mainstay
deepseek: {
baseUrl: "https://api.deepseek.com/v1",
apiKey: "${DEEPSEEK_API_KEY}",
api: "openai-completions",
models: [
{ id: "deepseek-chat", contextWindow: 128000, maxTokens: 8192 },
{ id: "deepseek-reasoner", contextWindow: 128000, maxTokens: 8192 }
]
},
// 3) Kimi (Moonshot): long‑form writing
moonshot: {
baseUrl: "https://api.moonshot.cn/v1",
apiKey: "${MOONSHOT_API_KEY}",
api: "openai-completions",
models: [
{ id: "kimi-k2.5", contextWindow: 256000, maxTokens: 8192 }
]
}
}
}
}
2. Defining Model Routing per Role
The default controller Agent uses GLM‑5 as the primary model with a three‑level fallback chain (GLM‑4.7 → GLM‑Flash → DeepSeek):
{
agents: {
defaults: {
model: {
primary: "zai/glm-5",
fallbacks: [
"zai/glm-4.7",
"zai/glm-4.7-flash",
"deepseek/deepseek-chat"
]
}
}
}
}
The writer Agent (Kimi K2.5) falls back to GLM‑5 and GLM‑Flash:
{
agents: {
defaults: { /* same as above */ },
writer: {
description: "Long‑form writing / industry analysis / tutorial generation",
model: {
primary: "moonshot/kimi-k2.5",
fallbacks: ["zai/glm-5", "zai/glm-4.7-flash"]
}
}
}
}
The coder Agent (DeepSeek) falls back to GLM‑5:
{
agents: {
defaults: { /* same as above */ },
writer: { /* same as above */ },
coder: {
description: "Code generation / refactoring / debugging / patch‑level edits",
model: {
primary: "deepseek/deepseek-chat",
fallbacks: ["zai/glm-5"]
}
}
}
}
3. Agent Definitions in AGENTS.md
A minimal multi‑Agent file looks like this:
# AGENTS.md - Multi‑Model Multi‑Role Configuration
## controller
- id: controller
- description: Main controller, handles task decomposition, tool calls, overall flow
- model: uses `agents.defaults.model` (GLM‑5 + fallback chain)
### Usage suggestions
- default for private or group chats
- suitable for complex tasks, automated workflows, tool orchestration
---
## writer
- id: writer
- description: Long‑form writing assistant
- model: `agents.writer.model` (Kimi K2.5 primary)
### Usage suggestions
- `/agent writer` to switch to writing mode
- ideal for tutorials, industry analysis, reports
---
## coder
- id: coder
- description: Programming assistant
- model: `agents.coder.model` (DeepSeek primary)
### Usage suggestions
- `/agent coder` for code generation, refactoring, debugging
---
## local-coder
- id: local-coder
- description: Local code assistant for private repositories and offline tasks
- model: `agents["local-coder"].model` (local Qwen3‑Coder‑Next)
### Usage suggestions
- `/agent local-coder` to analyze private code or work offline
4. Adding a New Agent
To add a new role, add an entry under agents.<id> in openclaw.json and a corresponding description block in AGENTS.md. The example below adds a local-coder that uses the locally hosted Qwen3‑Coder‑Next model.
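Following the conventions of the earlier routing snippets, the corresponding openclaw.json entry might look like this (the local/qwen3-coder-next ID refers to the local provider registered in section 5; the fallback to deepseek-chat is an assumption, added so the role keeps working if the local server is offline):
{
  agents: {
    defaults: { /* same as above */ },
    writer: { /* same as above */ },
    coder: { /* same as above */ },
    "local-coder": {
      description: "Local code assistant for private repositories and offline tasks",
      model: {
        // locally hosted Qwen3-Coder-Next (provider defined in section 5)
        primary: "local/qwen3-coder-next",
        // assumption: fall back to a cloud coder model when the local endpoint is unreachable
        fallbacks: ["deepseek/deepseek-chat"]
      }
    }
  }
}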
5. Integrating Local Models and Vector Store
Local model Qwen3‑Coder‑Next can be added as a local provider (via Ollama or LM Studio; the snippet below points at LM Studio's default port 1234):
{
models: {
mode: "merge",
providers: {
// other providers …
local: {
baseUrl: "http://127.0.0.1:1234/v1",
apiKey: "${OLLAMA_API_KEY}",
api: "openai-responses",
models: [
{ id: "qwen3-coder-next", contextWindow: 32768, maxTokens: 8192 }
]
}
}
}
}
BGE‑M3 is used as an embedding model for memory retrieval and knowledge‑base lookup. Its ID is referenced in the memory/search configuration of OpenClaw.
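As a rough sketch only (the memorySearch key and its fields below are assumptions; verify the exact names against the memory/search documentation for your OpenClaw version), a locally served BGE‑M3 could be registered like any other local model and then referenced from the memory configuration:
{
  models: {
    mode: "merge",
    providers: {
      local: {
        /* baseUrl, apiKey, api as in section 5, plus the embedding model: */
        models: [
          // BGE-M3 accepts input sequences up to 8192 tokens
          { id: "bge-m3", contextWindow: 8192 }
        ]
      }
    }
  },
  agents: {
    defaults: {
      // assumption: this is where the embedding model ID is referenced
      memorySearch: {
        provider: "local",
        model: "bge-m3"
      }
    }
  }
}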
6. End‑to‑End Task Example
For a weekly report that includes a code example, the controller (GLM‑5) decomposes the task and produces the outline, the writer (Kimi K2.5) drafts the long‑form article, the coder (DeepSeek) generates the code snippet, and the fallback chains absorb any model failures. The user simply chats with the controller or switches roles with the /agent writer and /agent coder commands.
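An illustrative session might run as follows (the prompts are invented for illustration; only the /agent commands come from the configuration above):
# 1) Chat with the default controller (GLM‑5): decompose the task and draft the outline
> Prepare this week's report; it needs one runnable code example.
# 2) Switch to the writer (Kimi K2.5) for the long‑form draft
> /agent writer
> Write the full report body from the outline above.
# 3) Switch to the coder (DeepSeek) for the code snippet
> /agent coder
> Generate the code example referenced in the report.
If a primary model fails at any step, the fallback chain configured in section 2 takes over transparently.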
7. Summary
The key takeaway is that you are not picking a single “best” domestic model; you are assembling a team of models, each placed in the role where it excels, by configuring openclaw.json for routing and documenting responsibilities in AGENTS.md. Following the steps above yields a ready‑to‑run OpenClaw workspace with a clear division of labor among GLM‑5, Kimi K2.5, DeepSeek, GLM‑Flash, Qwen3‑Coder‑Next and BGE‑M3.