Can Large Language Models Be Query Optimizer for Relational Databases?

📅 2025-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Can large language models (LLMs) replace traditional relational database query optimizers? This paper proposes LLM-QO, the first framework enabling end-to-end, autoregressive execution plan generation on PostgreSQL without explicit enumeration. To bridge the gap between general-purpose LLMs and database optimization, we introduce QInstruct—a novel data curation recipe—and a two-stage fine-tuning paradigm comprising QIT (instruction tuning) and QDPO (direct preference optimization). We further propose a metadata textualization scheme to encode database schema and statistics, ensuring seamless integration with PostgreSQL’s execution engine. Experiments across three diverse query workloads demonstrate that LLM-QO consistently generates syntactically valid and cost-efficient plans, outperforming both PostgreSQL’s native optimizer and state-of-the-art learned optimizers. Our approach breaks away from conventional cost-model-driven enumeration paradigms, establishing a new foundation for LLM-based query optimization.

📝 Abstract
Query optimization, which finds an optimized execution plan for a given query, is a complex planning and decision-making problem within the exponentially growing plan space of database management systems (DBMS). Traditional optimizers rely heavily on cost models constructed by various heuristics and empirical tuning, which can lead to suboptimal plans. Recent developments in Large Language Models (LLMs) have demonstrated their potential for solving complex planning and decision-making problems, such as arithmetic and programmatic tasks. In this paper, we explore the potential of LLMs in handling query optimization and propose a tentative LLM-based query optimizer dubbed LLM-QO, built on PostgreSQL's execution engine. In LLM-QO, we formulate query optimization in an autoregressive fashion that directly generates the execution plan without explicit plan enumeration. To investigate the essential input of LLM-QO, we design a customized data recipe named QInstruct to collect training data from various optimizers and serialize the database's metadata, queries, and corresponding plans into a textual format. Based on QInstruct, we implement a two-stage fine-tuning pipeline, Query Instruction Tuning (QIT) and Query Direct Preference Optimization (QDPO), to equip general-purpose LLMs with the capability to handle query optimization. In our experiments, LLM-QO generates valid, high-quality plans and consistently outperforms both traditional and learned optimizers on three query workloads. Our findings verify that LLMs can serve as query optimizers, while generalization, efficiency, and adaptivity deserve further research effort.
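The abstract describes QInstruct as serializing the database's metadata, queries, and plans into a textual format for fine-tuning. A minimal sketch of what such a serialization step could look like; the field layout, section headers, and plan notation here are illustrative assumptions, not the paper's actual recipe:

```python
# Hypothetical QInstruct-style serialization: flatten schema metadata,
# table statistics, a SQL query, and a target plan into one textual
# prompt/target training instance. All names and formats are illustrative.

def serialize_instance(schema, stats, sql, plan):
    """Build one textual training instance from metadata, a query, and its plan."""
    meta_lines = []
    for table, cols in schema.items():
        # One line per table: name, columns, and a row-count statistic.
        meta_lines.append(f"table {table}({', '.join(cols)}) rows={stats.get(table, '?')}")
    prompt = (
        "### Database metadata\n" + "\n".join(meta_lines) +
        "\n### Query\n" + sql.strip() +
        "\n### Task\nGenerate an execution plan."
    )
    return {"prompt": prompt, "target": plan.strip()}

instance = serialize_instance(
    schema={"movie": ["id", "title"], "cast_info": ["movie_id", "person_id"]},
    stats={"movie": 2500000, "cast_info": 36000000},
    sql="SELECT title FROM movie JOIN cast_info ON movie.id = cast_info.movie_id;",
    plan="HashJoin(SeqScan(cast_info), SeqScan(movie))",
)
print(instance["prompt"].splitlines()[1])  # -> table movie(id, title) rows=2500000
```

The key design point the abstract emphasizes is that the model's output is the plan itself, generated token by token, rather than a score fed into an enumerator.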
Problem

Research questions and friction points this paper is trying to address.

Explore whether LLMs can perform relational database query optimization.
Develop the LLM-QO optimizer on top of PostgreSQL's execution engine.
Evaluate whether LLM-QO outperforms traditional and learned optimizers across query workloads.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs for query optimization
Autoregressive execution plan generation
Customized data recipe QInstruct
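The second fine-tuning stage, QDPO, applies direct preference optimization to plans. A minimal sketch of how preference pairs might be assembled from plans collected from multiple optimizers; the cost values, plan strings, and the cheapest-vs-costliest pairing rule are illustrative assumptions, not the paper's actual procedure:

```python
# Hypothetical QDPO data preparation: for each query, pair the lowest-cost
# observed plan ("chosen") against the highest-cost one ("rejected"),
# yielding DPO-style preference triples. Details are illustrative.

def build_preference_pairs(candidates):
    """candidates: {query: [(plan_text, measured_cost), ...]} -> preference pairs."""
    pairs = []
    for query, plans in candidates.items():
        ranked = sorted(plans, key=lambda p: p[1])  # cheapest plan first
        if len(ranked) >= 2:
            pairs.append({
                "prompt": query,
                "chosen": ranked[0][0],    # best plan observed for this query
                "rejected": ranked[-1][0], # worst plan observed for this query
            })
    return pairs

pairs = build_preference_pairs({
    "SELECT title FROM movie JOIN cast_info ON movie.id = cast_info.movie_id;": [
        ("NestedLoop(movie, cast_info)", 412.0),  # e.g. one optimizer's plan
        ("HashJoin(cast_info, movie)", 87.5),     # e.g. another optimizer's plan
    ],
})
print(pairs[0]["chosen"])  # -> HashJoin(cast_info, movie)
```

Collecting plans from several optimizers, as QInstruct does, naturally supplies the contrasting candidates such a preference stage needs.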