🤖 AI Summary
This work addresses the limited generalization of existing learning-based benefit estimators in online index tuning, which stems from sparse training feedback and workload drift. To overcome these challenges, the authors propose UTune, a novel framework that integrates operator-level learning models with uncertainty quantification, explicitly incorporating uncertainty estimates into the index selection and configuration enumeration process. Furthermore, UTune employs an uncertainty-aware ε-greedy strategy to enable robust and efficient evaluation of index benefits. Experimental results demonstrate that UTune significantly outperforms state-of-the-art methods, achieving faster convergence under stable workloads while simultaneously reducing both query execution time and exploration overhead.
📝 Abstract
There has been a flurry of recent proposals on learned benefit estimators for index tuning. Although these learned estimators show promising improvements over what-if query optimizer calls in terms of the accuracy of estimated index benefit, they face significant limitations when applied to online index tuning, an arguably more common and more challenging scenario in real-world applications. There are two major challenges for learned index benefit estimators in online tuning: (1) the limited amount of query execution feedback available to train the models, and (2) the constant arrival of new, unseen queries caused by workload drift. The combination of the two hinders the generalization capability of existing learned index benefit estimators. To overcome these challenges, we present UTune, an uncertainty-aware online index tuning framework that employs operator-level learned models with improved generalization over unseen queries. At the core of UTune is an uncertainty quantification mechanism that characterizes the inherent uncertainty of the operator-level learned models given limited online execution feedback. We further integrate uncertainty information into index selection and configuration enumeration, the key component of any index tuner, by developing a new variant of the classic $\epsilon$-greedy search strategy with uncertainty-weighted index benefits. Experimental evaluation shows that UTune not only significantly improves workload execution time compared to state-of-the-art online index tuners but also reduces the index exploration overhead, resulting in faster convergence when the workload is relatively stable.
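To make the search strategy concrete, below is a minimal, hypothetical sketch of an uncertainty-weighted ε-greedy selection step. It is not the paper's implementation: the candidate representation (a name, a predicted benefit, and an uncertainty score per index configuration) and the choice to bias exploration toward high-uncertainty configurations are illustrative assumptions.

```python
import random

def select_configuration(candidates, epsilon=0.1, rng=random):
    """Pick an index configuration from `candidates`, where each
    candidate is a (name, predicted_benefit, uncertainty) triple.

    Illustrative sketch (not the paper's exact algorithm):
    - With probability 1 - epsilon, exploit: choose the candidate
      with the highest predicted benefit.
    - Otherwise, explore: sample a candidate with probability
      proportional to its uncertainty, so configurations the learned
      model knows least about get executed (and hence observed) more.
    """
    if rng.random() >= epsilon:
        # Exploit: best estimated benefit.
        return max(candidates, key=lambda c: c[1])
    # Explore: uncertainty-weighted roulette-wheel sampling.
    total = sum(c[2] for c in candidates)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for cand in candidates:
        acc += cand[2]
        if r <= acc:
            return cand
    return candidates[-1]

# Example usage with hypothetical candidates:
cands = [("idx_a", 1.0, 0.5), ("idx_b", 3.0, 0.1), ("idx_c", 2.0, 0.9)]
best = select_configuration(cands, epsilon=0.0)  # always exploits
```

With `epsilon=0.0` the selector reduces to pure greedy search; raising ε shifts effort toward gathering execution feedback for poorly understood configurations, which is the intuition behind trading a little short-term regret for better model calibration.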