🤖 AI Summary
Database configuration tuning must navigate a high-dimensional space of continuous and discrete parameters, and existing methods incur substantial overhead from repeated workload replay. This paper proposes the first end-to-end generative language model (GLM)-driven tuning method, directly mapping SQL workloads to high-performance configuration parameter sequences without iterative execution. We introduce an automated synthetic data generation framework enabling workload-to-configuration sequence-to-sequence modeling, coupled with database performance-aware fine-tuning strategies. Evaluated on ten standard and three real-world benchmarks, our approach consistently outperforms state-of-the-art methods: on the JOB benchmark, it achieves optimal configurations 24× faster than prior work, while delivering higher throughput and lower latency.
📝 Abstract
Database knob tuning is a significant challenge for database administrators (DBAs), as it involves tuning a large number of configuration knobs with continuous or discrete values to achieve optimal database performance. Traditional methods, such as manual tuning or learning-based approaches, typically require numerous workload replays and are both time-consuming and resource-intensive. To address this challenge, we introduce E2ETune, an end-to-end knob tuner powered by a fine-tuned generative language model. The key idea is to leverage the exceptional sequence-to-sequence modeling capabilities of generative language models to capture the complex mapping between workloads (inputs) and their corresponding promising configurations (outputs). To achieve this goal, we propose a novel data generation framework designed to efficiently and automatically produce a vast quantity of training data, where each data sample consists of a ⟨workload, configuration⟩ pair. These synthetic data are then used to fine-tune a generative language model, yielding an end-to-end knob tuner named E2ETune. This tuner can directly recommend promising configurations for any new workload, eliminating the need for the extensive workload replays required by previous approaches. We have conducted extensive experiments to evaluate E2ETune's effectiveness and efficiency, utilizing 10 representative benchmarks and 3 real-world benchmarks. Compared to state-of-the-art methods, E2ETune identifies superior configurations significantly faster while achieving higher throughput or lower latency. For example, on the challenging JOB benchmark, E2ETune finds the best-performing configuration 24× faster, on average, than existing approaches.
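To make the sequence-to-sequence framing concrete, the sketch below shows how a ⟨workload, configuration⟩ pair might be serialized into a prompt/target text pair for fine-tuning a generative language model. The serialization format, function name, and knob names here are hypothetical illustrations, not the actual format used by E2ETune:

```python
import json

def make_training_sample(workload_sql, knob_config):
    """Serialize a <workload, configuration> pair into a prompt/target
    text pair for sequence-to-sequence fine-tuning.

    NOTE: this format is a hypothetical sketch; the paper's actual
    workload featurization and knob encoding may differ.
    """
    # Prompt: the raw SQL statements of the workload, one per line.
    prompt = "Recommend knobs for the workload:\n" + "\n".join(workload_sql)
    # Target: knob settings rendered as an ordered "name=value" sequence,
    # so the model learns to emit a configuration as plain text.
    target = "; ".join(f"{k}={v}" for k, v in sorted(knob_config.items()))
    return {"input": prompt, "output": target}

sample = make_training_sample(
    ["SELECT COUNT(*) FROM orders WHERE total > 100;"],
    {"shared_buffers": "4GB", "work_mem": "64MB"},
)
print(json.dumps(sample, indent=2))
```

At inference time, the fine-tuned model would be given only the `input` text of a new workload and asked to generate the `output` sequence, which is then parsed back into concrete knob settings, with no workload replay in the loop.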