🤖 AI Summary
Existing index recommendation approaches suffer from low efficiency (heuristic methods), poor generalization (supervised learning models), or insufficient accuracy and high computational overhead (current LLM-based methods). To address these limitations, this paper proposes MAAdvisor, a zero-shot multi-agent large language model framework. It decomposes index recommendation into five collaborative sub-steps handled by dedicated agents—planning, selection, combination, revision, and reflection—enabling coordinated global control alongside localized execution. Leveraging task decomposition and zero-shot prompting, the framework adapts to diverse workloads and schemas without fine-tuning or domain-specific training data. Experiments demonstrate state-of-the-art performance in both recommendation quality and inference efficiency, significantly outperforming traditional heuristics, supervised learning baselines, and single-agent prompt-engineering approaches, while generalizing across heterogeneous database environments with high automation and low deployment cost.
📝 Abstract
Index recommendation is one of the most important problems in database management system (DBMS) optimization. Given queries and certain index-related constraints, traditional methods rely on heuristic optimization or learning-based models to select effective indexes and improve query performance. However, heuristic optimization suffers from high computation time, and learning-based models generalize poorly because they must be retrained for different workloads and database schemas. With the recent rapid development of large language models (LLMs), methods using prompt tuning have been proposed to improve the efficiency of index selection. However, such methods still cannot achieve state-of-the-art (SOTA) results, and preparing index selection demonstrations is resource-intensive. To address these issues, we propose MAAdvisor, a zero-shot LLM-based index advisor with a multi-agent framework. We decompose the index recommendation problem into sub-steps, including planning, selection, combination, revision, and reflection, and design a set of LLM-embedded agents to handle each of these sub-steps. Our method uses global agents to control the index selection process and local agents to select and revise indexes. Through extensive experiments, we show that MAAdvisor not only achieves SOTA performance compared to heuristic methods, but also outperforms learning-based and prompt-based methods with higher efficiency and better zero-shot inference ability.
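The pipeline described in the abstract can be sketched as a sequence of agent stages with global control and local execution. The sketch below is purely illustrative: the function names, data shapes, and toy heuristics are assumptions made for clarity, not the paper's actual prompts or agent interfaces (a real implementation would back each stage with a zero-shot LLM call).

```python
# Illustrative sketch of the five-stage multi-agent pipeline (planning,
# selection, combination, revision, reflection). All interfaces and
# heuristics here are hypothetical stand-ins for LLM-backed agents.

def planning_agent(workload):
    # Global agent: produce a per-table plan (here: one index target per table).
    return {q["table"]: 1 for q in workload}

def selection_agent(workload, plan):
    # Local agent: propose candidate single-column indexes from query predicates.
    return [(q["table"], q["column"]) for q in workload if plan.get(q["table"])]

def combination_agent(candidates):
    # Local agent: merge same-table candidates into composite indexes.
    by_table = {}
    for table, column in candidates:
        by_table.setdefault(table, []).append(column)
    return [(table, tuple(cols)) for table, cols in by_table.items()]

def revision_agent(indexes, max_indexes):
    # Local agent: revise the set to satisfy the index-count constraint.
    return indexes[:max_indexes]

def reflection_agent(indexes, workload):
    # Global agent: sanity-check that every recommended index serves a query.
    queried_tables = {q["table"] for q in workload}
    return [ix for ix in indexes if ix[0] in queried_tables]

def recommend(workload, max_indexes=3):
    plan = planning_agent(workload)
    candidates = selection_agent(workload, plan)
    combined = combination_agent(candidates)
    revised = revision_agent(combined, max_indexes)
    return reflection_agent(revised, workload)

workload = [
    {"table": "orders", "column": "customer_id"},
    {"table": "orders", "column": "order_date"},
    {"table": "users", "column": "email"},
]
print(recommend(workload))
# → [('orders', ('customer_id', 'order_date')), ('users', ('email',))]
```

The design point the sketch captures is the division of labor: the global planning and reflection stages bracket the pipeline, while the local selection, combination, and revision stages operate on concrete index candidates in between.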