🤖 AI Summary
To address the challenges of manual hyperparameter tuning and static configuration in Learned Index Structures (LIS), which hinder adaptability to dynamic data distributions and online workloads, this paper proposes LITune, an end-to-end automated tuning framework. Methodologically, LITune introduces (1) an adaptive training pipeline built on a tailored deep reinforcement learning (DRL) approach, allowing tuning policies to evolve alongside shifting data distributions; and (2) a lightweight on-the-fly update mechanism, the O2 system, which models state transitions to support rapid policy adaptation during online tuning. Experiments show that, relative to default parameter settings, LITune achieves up to a 98% reduction in runtime and up to a 17-fold increase in throughput. Under realistic dynamic workloads, it substantially improves the practicality, robustness, and deployment efficiency of LIS.
📝 Abstract
Learned Index Structures (LIS) have significantly advanced data management by leveraging machine learning models to optimize data indexing. However, designing these structures often involves critical trade-offs, making it challenging for both designers and end-users to find an optimal balance tailored to specific workloads and scenarios. While some indexes offer adjustable parameters that demand intensive manual tuning, others rely on fixed configurations based on heuristic auto-tuners or expert knowledge, which may not consistently deliver optimal performance. This paper introduces LITune, a novel framework for end-to-end automatic tuning of Learned Index Structures. LITune employs an adaptive training pipeline equipped with a tailor-made Deep Reinforcement Learning (DRL) approach to ensure stable and efficient tuning. To accommodate long-term dynamics arising from online tuning, we further enhance LITune with an on-the-fly updating mechanism termed the O2 system. These innovations allow LITune to effectively capture state transitions in online tuning scenarios and dynamically adjust to changing data distributions and workloads, marking a significant improvement over other tuning methods. Our experimental results demonstrate that LITune achieves up to a 98% reduction in runtime and a 17-fold increase in throughput compared to default parameter settings given a selected Learned Index instance. These findings highlight LITune's effectiveness and its potential to facilitate broader adoption of LIS in real-world applications.
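The tuning problem the abstract describes — searching an index's parameter space against a measured workload instead of relying on fixed defaults — can be illustrated with a minimal sketch. This is not LITune's actual code: the parameter names (`error_bound`, `node_size`), the simulated latency function, and the epsilon-greedy bandit learner (a far simpler stand-in for the paper's tailored DRL agent and O2 system) are all illustrative assumptions.

```python
import random

# Candidate configurations for a hypothetical learned index.
# (Parameter names are illustrative, not LITune's.)
PARAM_SPACE = [
    {"error_bound": e, "node_size": n}
    for e in (8, 16, 32, 64)
    for n in (128, 256, 512)
]

def simulated_latency(cfg):
    """Stand-in for measuring query runtime under a configuration.
    A real tuner would benchmark the index on the live workload."""
    return (abs(cfg["error_bound"] - 32) * 0.5
            + abs(cfg["node_size"] - 256) * 0.01)

def tune(steps=200, epsilon=0.2, seed=0):
    """Epsilon-greedy search over PARAM_SPACE: mostly exploit the
    best-known configuration, occasionally explore a random one."""
    rng = random.Random(seed)
    values = {i: 0.0 for i in range(len(PARAM_SPACE))}  # running reward means
    counts = {i: 0 for i in range(len(PARAM_SPACE))}
    for _ in range(steps):
        if rng.random() < epsilon:                   # explore
            i = rng.randrange(len(PARAM_SPACE))
        else:                                        # exploit best estimate
            i = max(values, key=values.get)
        reward = -simulated_latency(PARAM_SPACE[i])  # lower latency = higher reward
        counts[i] += 1
        values[i] += (reward - values[i]) / counts[i]  # incremental mean
    best = max(values, key=values.get)
    return PARAM_SPACE[best]

best_cfg = tune()
```

Even this toy loop shows why static defaults lose to feedback-driven tuning: the learner converges on the configuration with the lowest observed latency for the current workload, and re-running it after a distribution shift would converge to a different optimum — the dynamic setting that motivates LITune's DRL pipeline and O2 updates.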