Can Multi‑Agent Collaboration Automatically Tune Database Parameters with High Efficiency?
The paper presents CMA+DB, a hierarchical multi‑agent framework that automatically tunes database parameters across diverse workloads by combining classification‑based collaboration, layered training, and joint action selection, achieving superior performance, faster convergence, and strong generalization compared with existing tuning methods.
Research Background and Challenges
Modern distributed and cloud environments generate diverse database workloads such as high‑concurrency OLTP (TPC‑C), random read‑write cloud services (YCSB), and real‑time social interactions (Twitter). Traditional tuning methods struggle to keep pace: manual DBA tuning is time‑consuming and cannot track dynamic workloads; heuristic search becomes intractable in high‑dimensional parameter spaces; Bayesian optimization depends on manually selecting the key parameters; and existing single‑agent reinforcement‑learning approaches provide only coarse‑grained adjustments, limiting precision and generalization.
Core Innovation: CMA+DB Multi‑Agent Collaboration Framework
The CMA+DB framework introduces a three‑level hierarchical training mechanism and a classification‑based collaboration strategy. It integrates three sub‑models:
SAPM (Single‑Agent Pre‑training Model) – each agent specializes in a group of functionally similar parameters and learns their impact on performance in isolation.
MATM (Multi‑Agent Joint‑training Model) – agents combine their predicted actions into a unified configuration, ensuring no duplicate parameters and encouraging cross‑agent adaptation.
PJTM (Probabilistic Joint‑action Selection Model) – assigns a selection factor P_i to each agent; low‑impact agents are down‑weighted to reduce negative interference.
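The interplay of the three sub‑models can be made concrete with a small sketch. The snippet below is a minimal, hypothetical illustration (not the paper's implementation) of how PJTM‑style selection factors P_i might gate each agent's knob proposals while preserving MATM's no‑duplicate guarantee; the agent names and knob names are illustrative assumptions, borrowed from common PostgreSQL settings.

```python
import random

def select_joint_action(agent_actions, selection_factors, rng=random.Random(0)):
    """Combine per-agent knob proposals into one configuration.

    agent_actions: dict mapping agent name -> {knob: adjustment} proposals
    selection_factors: dict mapping agent name -> P_i in [0, 1]; an agent's
    proposal is kept with probability P_i, so low-impact agents rarely
    perturb the final recommendation.
    """
    config = {}
    for agent, knobs in agent_actions.items():
        p = selection_factors.get(agent, 1.0)
        if rng.random() < p:  # keep this agent's actions with probability P_i
            for knob, delta in knobs.items():
                # each agent owns a disjoint knob group, so no duplicates arise
                assert knob not in config, f"duplicate knob {knob}"
                config[knob] = delta
    return config

# Hypothetical example: three agents, one with low measured impact.
actions = {
    "memory_agent": {"shared_buffers": +0.2, "work_mem": +0.1},
    "io_agent":     {"random_page_cost": -0.3},
    "noise_agent":  {"geqo_threshold": +0.05},  # low-impact agent
}
factors = {"memory_agent": 1.0, "io_agent": 0.9, "noise_agent": 0.1}
print(select_joint_action(actions, factors))
```

With these illustrative factors, the low‑impact agent's proposal is usually dropped, which is the interference‑reduction effect PJTM aims for.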
Training Stages
Stage 1 – SAPM: Agents explore their assigned parameter subset independently, biasing the neural network toward adjusting important parameters and identifying key knobs without mutual interference.
Stage 2 – MATM: Predicted actions from all agents are merged into a recommended configuration set. The joint‑training process forces agents to adapt to each other’s behavior, increasing the number of tunable parameters and model expressiveness.
Stage 3 – PJTM: After joint training, each agent receives a probabilistic weight P_i. Agents with negligible performance impact receive lower weights, mitigating their influence on the final recommendation.
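The three stages above can be sketched as a single training pipeline. The following is a toy sketch under stated assumptions: the agent policy, the environment reward (size of the applied configuration), and the impact measure are all stand‑ins invented for illustration, not the paper's algorithm.

```python
class KnobAgent:
    """Toy agent owning one group of knobs; tracks its reward contribution."""
    def __init__(self, name, knobs):
        self.name, self.knobs = name, knobs
        self.total_reward = 0.0
    def act(self, observation):
        # placeholder policy: propose a small increase for every owned knob
        return {k: 0.1 for k in self.knobs}
    def learn(self, reward):
        self.total_reward += reward
    def impact(self):
        return self.total_reward

class ToyDB:
    """Stand-in environment: reward = number of knobs adjusted."""
    def observe(self, knobs):
        return {k: 0.0 for k in knobs}
    def apply(self, config):
        return float(len(config))

def train_cma_db(agents, env, pretrain_steps=2, joint_steps=2):
    # Stage 1 (SAPM): each agent explores its own knob subset in isolation.
    for agent in agents:
        for _ in range(pretrain_steps):
            reward = env.apply(agent.act(env.observe(agent.knobs)))
            agent.learn(reward)
    # Stage 2 (MATM): proposals are merged into one configuration and all
    # agents learn from the shared outcome, adapting to each other.
    for _ in range(joint_steps):
        joint = {}
        for agent in agents:
            joint.update(agent.act(env.observe(agent.knobs)))
        reward = env.apply(joint)
        for agent in agents:
            agent.learn(reward)
    # Stage 3 (PJTM): normalize measured contributions into weights P_i.
    total = sum(a.impact() for a in agents) or 1.0
    return {a.name: a.impact() / total for a in agents}

agents = [KnobAgent("mem", ["shared_buffers"]),
          KnobAgent("io", ["random_page_cost", "effective_io_concurrency"])]
print(train_cma_db(agents, ToyDB()))
```

The returned weights sum to one, so they can be used directly as the selection factors P_i in the final joint‑action step.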
Algorithmic Foundations
The framework builds on deep deterministic policy gradient (DDPG) for single‑agent learning and multi‑agent DDPG (MADDPG) for coordinated learning. An actor‑critic architecture provides separate observation and action spaces for each agent, enabling continuous interaction with the database environment and iterative policy improvement.
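A single actor‑critic update in the DDPG style can be illustrated with deliberately tiny linear models. This is a one‑dimensional sketch under simplifying assumptions (linear actor and critic, scalar state and action), not the paper's networks; in the MADDPG variant each critic would additionally condition on the other agents' actions while every actor stays local to its own observation space.

```python
class Actor:
    """Deterministic policy: maps a DB metric (state) to a knob adjustment."""
    def __init__(self):
        self.w = 0.0
    def act(self, state):
        return self.w * state

class Critic:
    """Linear value estimate Q(s, a). In MADDPG this critic would also see
    the other agents' actions, enabling centralized training."""
    def __init__(self):
        self.ws, self.wa = 0.0, 0.0
    def q(self, state, action):
        return self.ws * state + self.wa * action
    def update(self, state, action, target, lr=0.05):
        # one gradient step on the squared TD error (Q - target)^2
        err = self.q(state, action) - target
        self.ws -= lr * err * state
        self.wa -= lr * err * action

def ddpg_step(actor, critic, state, reward, next_state, gamma=0.9, lr=0.05):
    """One actor-critic update in the DDPG style."""
    action = actor.act(state)
    # critic: regress Q(s, a) toward the TD target r + gamma * Q(s', pi(s'))
    target = reward + gamma * critic.q(next_state, actor.act(next_state))
    critic.update(state, action, target, lr)
    # actor: follow the deterministic policy gradient dQ/da * da/dw = wa * s
    actor.w += lr * critic.wa * state
```

Iterating `ddpg_step` over observed (state, reward, next state) transitions is the continuous interaction loop the framework relies on; real implementations add replay buffers, target networks, and exploration noise on top of this skeleton.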
Experimental Validation
Experiments were conducted on PostgreSQL using three representative workloads (TPC‑C, YCSB, Twitter). Compared with mainstream tuners (e.g., OtterTune, CDBTune+), CMA+DB consistently achieved:
Faster convergence, especially under high‑concurrency TPC‑C scenarios, reducing tuning time and adaptation cost.
Higher throughput and significantly lower latency, maintaining stable performance under peak loads.
Strong generalization across workloads and parameter scales without manual re‑engineering.
Summary and Outlook
CMA+DB delivers an automated, precise, and low‑overhead solution for database parameter optimization, lowering the barrier for performance tuning. Future work will focus on reducing computational complexity for ultra‑large‑scale parameter spaces and extending the framework to other DBMSs such as MySQL and OceanBase, aiming to broaden its applicability in finance, e‑commerce, and social platforms.