How a Multi‑Agent Framework Supercharges Energy‑Sector AI Tasks
This article explains how the authors built a multi‑agent framework for the energy domain. The framework splits complex tasks into simple subtasks, runs a planner‑scheduler‑executor pipeline, and adds structured communication, memory management, and streaming output to counter large‑model attention diffusion and improve efficiency and reliability.
Introduction
With the rapid development of large language models such as DeepSeek and Manus, AI applications have proliferated. To provide round‑the‑clock professional energy expertise for tasks such as risk pre‑control, equipment operation, and after‑sales service, the team built a comprehensive energy‑agent platform on a multi‑agent framework.
Why Multi‑Agent?
Single agents suffer from attention diffusion when handling long prompts, leading to inefficient reasoning. By decomposing complex tasks into simple subtasks and assigning each to a specialized agent, the framework improves efficiency and flexibility.
Framework Overview
The system consists of three stages: planning & orchestration, scheduling & execution, and result aggregation.
Planner
The planner performs intent recognition, splits the task into sub‑tasks, and generates a Plan containing step sequence, target agent, and requirements.
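As a rough illustration, a planner of this kind might look like the following Python sketch. The keyword‑based intent check and the agent names are placeholders, not the authors' implementation (in practice intent recognition would be an LLM call):

```python
# Illustrative planner: recognizes intent and emits a Plan with
# ordered steps, each naming a target agent and its requirement.
def plan(user_query: str) -> dict:
    steps = []
    # Toy intent recognition: keyword rules stand in for an LLM call.
    if "risk" in user_query:
        steps.append({"seqNo": 0, "agentName": "risk-agent",
                      "requirement": "assess pre-operation risk"})
    if "equipment" in user_query:
        steps.append({"seqNo": len(steps), "agentName": "equipment-agent",
                      "requirement": "check equipment status"})
    return {"planId": "plan-001", "userQuery": user_query, "steps": steps}
```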
Scheduler & Executor
The scheduler parses the Plan, builds a directed acyclic graph (DAG), and dispatches each step to the appropriate agent via the agent registry. It monitors step status, supports interruption, and stores intermediate results in an ExecutionPlan structure.
{
  "planId": "plan ID",
  "userQuery": "the user's input",
  "steps": [{
    "seqNo": 0,
    "agentName": "agent name",
    "requirement": "step objective",
    "status": "step status",
    "result": "execution result"
  }],
  "context": {
    "key of the current execution item": "recognized context information"
  }
}

Result Aggregator (Finalizer)
After all steps finish, the finalizer aggregates results and outputs a structured report to the frontend.
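Putting the executor and finalizer together, a minimal sequential sketch might look like this (illustrative Python; the registry shape and report format are assumptions, not the authors' code):

```python
# Illustrative sequential scheduler: dispatches each step to its agent
# via a registry, records status/result in the ExecutionPlan, then a
# finalizer aggregates all step results into one report.
def execute(plan: dict, registry: dict) -> str:
    for step in plan["steps"]:
        agent = registry[step["agentName"]]          # look up the target agent
        step["result"] = agent(step["requirement"])  # run the subtask
        step["status"] = "DONE"
    # Finalizer: aggregate the per-step results into a structured report.
    return "\n".join(f'{s["seqNo"]}: {s["result"]}' for s in plan["steps"])
```

A real implementation would also handle failures and interruption per step; this sketch only shows the dispatch-and-aggregate shape.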
Agent Communication Bus
To keep execution results ordered and structured during streaming, the bus wraps outputs in a DSL:
{
  "index": "step sequence number",
  "content": "streamed text content",
  "agentChatResponse": {
    "content": "streamed status message",
    "intent": "structured data type",
    "data": { /* structured result */ }
  }
}

This ensures each fragment is associated with its step and can be parsed by the frontend.
Memory Management
Two memory modules are used:
MemoryAdvisor: enhances large‑model calls by injecting recent UserMessage and relevant AgentChatMemory entries into the prompt.
AgentChatMemory: stores assistant messages and tool responses, keyed by {agentName}_{userId}_{conversationId} so that memories stay isolated per agent.
During a call, MemoryAdvisor retrieves the latest N memories and builds the final prompt; after the model responds, it saves the AssistantMessage back into AgentChatMemory.
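The per‑agent isolation comes down to the composite key. A minimal sketch of such a store (illustrative Python, not the Spring AI ChatMemory API):

```python
# Illustrative memory store keyed by {agentName}_{userId}_{conversationId},
# so each agent's history stays isolated; retrieval returns the latest N.
class AgentChatMemory:
    def __init__(self):
        self._store = {}

    def _key(self, agent: str, user: str, conv: str) -> str:
        return f"{agent}_{user}_{conv}"

    def save(self, agent: str, user: str, conv: str, message: str) -> None:
        self._store.setdefault(self._key(agent, user, conv), []).append(message)

    def latest(self, agent: str, user: str, conv: str, n: int) -> list:
        return self._store.get(self._key(agent, user, conv), [])[-n:]
```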
Controlled Tool Invocation
Automatic tool calling is disabled. The system parses the model's toolCalls (e.g., in OpenAI format), invokes the corresponding tool manually, then creates a ToolResponseMessage with the matching tool_call_id and role "tool" before feeding it back into memory.
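In code, that manual loop might be sketched like this (illustrative Python against an OpenAI‑style toolCalls payload; the helper name and registry shape are assumptions):

```python
import json

# Illustrative manual tool invocation: parse the model's toolCalls,
# run the matching tool ourselves, and build a role="tool" message
# whose tool_call_id matches the call, ready to feed back into memory.
def handle_tool_calls(tool_calls: list, tools: dict) -> list:
    responses = []
    for call in tool_calls:
        fn = tools[call["function"]["name"]]           # look up the tool
        args = json.loads(call["function"]["arguments"])
        result = fn(**args)                            # invoke it manually
        responses.append({"tool_call_id": call["id"],
                          "role": "tool",
                          "content": str(result)})
    return responses
```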
{
  "tool_call_id": "unique ID",
  "role": "tool",
  "content": "tool execution result"
}

Key Takeaways
Minimize reliance on LLMs for deterministic business logic; use workflow agents for clarity and performance.
Make agent execution transparent with detailed logging of plan states, context, and tool interactions.
Understand underlying frameworks (e.g., Spring AI) to avoid pitfalls in streaming, memory handling, and tool‑call integration.
Future Improvements
Introduce a generic workflow orchestration framework to replace hard‑coded pipelines.
Upgrade from sequential execution to DAG‑based concurrent scheduling.
Implement long‑term vector memory retrieval to extend context beyond prompt limits.
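The planned DAG‑based upgrade could, for example, group steps into levels via a topological sort so each level runs concurrently. This is a sketch of one possible direction, not the authors' design:

```python
# Group DAG steps into levels (Kahn's algorithm): every step in a level
# depends only on steps in earlier levels, so the steps within a level
# can be dispatched to their agents concurrently.
def topo_levels(deps: dict) -> list:
    indeg = {n: len(ps) for n, ps in deps.items()}
    children = {n: [] for n in deps}
    for n, ps in deps.items():
        for p in ps:
            children[p].append(n)
    level = [n for n, d in indeg.items() if d == 0]
    levels = []
    while level:
        levels.append(sorted(level))
        nxt = []
        for n in level:
            for c in children[n]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    nxt.append(c)
        level = nxt
    return levels
```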
References:
Attention mechanisms in LLMs: https://zhuanlan.zhihu.com/p/1890707372265173007
Building effective agents: https://www.anthropic.com/engineering/building-effective-agents
Alibaba Cloud Developer
Alibaba's official tech channel, featuring all of its technology innovations.