GPTuner: LLM-Driven PostgreSQL Knob Tuning
GPTuner, an LLM-driven system for PostgreSQL knob tuning developed by researchers at Sichuan University, demonstrates that knowledge processing, parameter selection, search-range optimization, and a two-stage Bayesian optimization framework each contribute significantly to performance. Building the knowledge base cost roughly 880,000 GPT-4 tokens (about $30), a one-off expense since the constructed knowledge is reusable.
Authors: Lao Jiale, Wang Yibo, Li Yufeng. Affiliation: Sichuan University Intelligent Systems Laboratory.
Ablation Experiments: After evaluating GPTuner's end-to-end performance, the researchers examined how much each module contributes. Four sets of ablation experiments verified the impact of knowledge processing, parameter selection, parameter search-range optimization, and the two-stage Bayesian optimization framework on GPTuner's performance.
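The two-stage framework can be illustrated with a minimal sketch: a coarse stage explores a small discretized space built from LLM-suggested values, then a fine stage searches a narrowed window around the coarse winner. Everything here is hypothetical for illustration (the knob names, ranges, and toy benchmark are assumptions, and plain random search stands in for the actual Bayesian optimizer GPTuner uses):

```python
import random

# Hypothetical benchmark score: higher is better. Stands in for running a
# real PostgreSQL workload with the candidate knob settings.
def run_benchmark(config):
    # Toy surrogate that peaks at shared_buffers=4096, work_mem=64.
    return -((config["shared_buffers"] - 4096) ** 2
             + 100 * (config["work_mem"] - 64) ** 2)

# Assumed LLM-suggested search ranges per knob (values in MB, illustrative).
RANGES = {"shared_buffers": (1024, 8192), "work_mem": (4, 256)}

def sample(space, grid=None):
    """Draw one configuration; the coarse stage samples a small discrete grid."""
    if grid:
        return {k: random.choice(grid[k]) for k in space}
    return {k: random.randint(lo, hi) for k, (lo, hi) in space.items()}

def tune(trials_coarse=20, trials_fine=30, seed=0):
    random.seed(seed)
    # Stage 1: coarse search over a few discrete candidates per knob.
    grid = {k: [lo, (lo + hi) // 2, hi] for k, (lo, hi) in RANGES.items()}
    best = max((sample(RANGES, grid) for _ in range(trials_coarse)),
               key=run_benchmark)
    # Stage 2: fine-grained search in a window narrowed around the winner.
    narrowed = {k: (max(RANGES[k][0], v // 2), min(RANGES[k][1], v * 2))
                for k, v in best.items()}
    for _ in range(trials_fine):
        cand = sample(narrowed)
        if run_benchmark(cand) > run_benchmark(best):
            best = cand
    return best
```

Calling `tune()` returns a knob configuration within the original ranges; the coarse-then-fine structure is the point, not the stand-in search strategy.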
Cost Analysis: Finally, since invoking the LLM API incurs overhead, the researchers analyzed the cost. With GPT-4 as the language model, GPTuner consumed approximately 880,000 tokens (about 30 USD) to process PostgreSQL's 60 knobs. Notably, the constructed knowledge can be reused across tuning runs, so this is a one-off cost rather than a recurring one.
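The per-knob arithmetic follows directly from the reported totals; the amortization over repeated tuning runs below is an illustrative assumption, not a figure from the paper:

```python
# Reported one-off cost of building GPTuner's knowledge for PostgreSQL.
total_tokens = 880_000    # GPT-4 tokens (from the cost analysis above)
total_cost_usd = 30       # approximate API spend
num_knobs = 60            # PostgreSQL knobs processed

tokens_per_knob = total_tokens / num_knobs   # ~14,667 tokens per knob
cost_per_knob = total_cost_usd / num_knobs   # $0.50 per knob

# Because the structured knowledge is cached and reused, the construction
# cost is amortized: after N tuning runs, the effective per-run cost is
# total_cost_usd / N (hypothetical N=10 shown).
runs = 10
cost_per_run = total_cost_usd / runs         # $3.00 per run
```

The key takeaway matches the article: the $30 is paid once, so the marginal cost of each subsequent tuning session approaches zero.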
For more details, please refer to the paper: https://web1.arxiv.org/abs/2311.03157
Code repository: https://github.com/SolidLao/GPTuner
Sohu Tech Products
A knowledge-sharing platform for Sohu's technology products. Sohu, a leading Chinese internet brand offering media, video, search, and gaming services to over 700 million users, continuously drives technological innovation and practice. We share practical insights and tech news here.