Making Databases Faster with LLM Evolutionary Sampling

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional query optimizers rely on handcrafted heuristics and statistical models, which struggle to capture the semantic relationships between queries and schemas, thereby limiting the effectiveness of physical execution plan optimization. This work proposes a novel approach that serializes execution plans into compact representations and leverages the semantic understanding capabilities of large language models (LLMs) to generate localized editing suggestions. These suggestions are integrated within an evolutionary search framework to iteratively refine candidate plans. By uniquely combining LLMs with evolutionary sampling, the method enables automatic, implicit optimization of execution plans, transcending the constraints of conventional cost models. Experiments on the DBPlanBench framework with the DataFusion engine demonstrate up to a 4.78× speedup on certain queries, and show that optimization strategies discovered in small-scale settings effectively transfer to large-scale databases.

📝 Abstract
Traditional query optimization relies on cost-based optimizers that estimate execution cost (e.g., runtime, memory, and I/O) using predefined heuristics and statistical models. Improving these heuristics requires substantial engineering effort, and even when implemented, they often cannot take into account semantic correlations in queries and schemas that could enable better physical plans. Using our DBPlanBench harness for the DataFusion engine, we expose the physical plan through a compact serialized representation and let the LLM propose localized edits that can be applied and executed. We then apply an evolutionary search over these edits to refine candidates across iterations. Our key insight is that LLMs can leverage semantic knowledge to identify and apply non-obvious optimizations, such as join orderings that minimize intermediate cardinalities. We obtain up to 4.78× speedups on some queries, and we demonstrate a small-to-large workflow in which optimizations found on small databases transfer effectively to larger databases.
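The loop the abstract describes — propose localized plan edits, evaluate the edited plans, and keep the best candidates across generations — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `cost()` stands in for actually executing a plan and timing it, and `propose_edits()` stands in for the LLM's edit suggestions (here, random operator swaps); all names are hypothetical.

```python
import random

def cost(plan):
    # Stand-in for executing the plan and measuring runtime.
    # Toy model: a "join" appearing later in the plan is more expensive,
    # loosely mimicking large intermediate results.
    return sum(i for i, op in enumerate(plan) if op == "join")

def propose_edits(plan, k=4):
    # Stand-in for the LLM: suggest k localized edits, each a swap
    # of two positions in the serialized plan.
    return [tuple(random.sample(range(len(plan)), 2)) for _ in range(k)]

def apply_edit(plan, edit):
    i, j = edit
    new = list(plan)
    new[i], new[j] = new[j], new[i]
    return new

def evolve(plan, generations=20, population=8):
    # Evolutionary sampling: expand each candidate with proposed edits,
    # score all offspring, and retain the cheapest ones for the next round.
    random.seed(0)
    best, pool = plan, [plan]
    for _ in range(generations):
        candidates = [apply_edit(p, e) for p in pool for e in propose_edits(p)]
        candidates.sort(key=cost)
        pool = candidates[:population]
        if cost(pool[0]) < cost(best):
            best = pool[0]
    return best

seed_plan = ["join", "filter", "join", "scan", "join", "scan"]
optimized = evolve(seed_plan)
assert cost(optimized) <= cost(seed_plan)
```

In the paper's setting, the cost function would be real execution time under DataFusion and the edit proposals would come from an LLM reading the serialized physical plan; the selection loop itself is the same shape.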
Problem

Research questions and friction points this paper is trying to address.

query optimization
cost-based optimizer
semantic correlations
physical plan
database performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based query optimization
evolutionary sampling
physical plan editing
semantic-aware optimization
query performance acceleration