Do LLMs Align with My Task? Evaluating Text-to-SQL via Dataset Alignment

📅 2025-10-06
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work investigates how structural alignment between training data and target queries—measured via SQL syntactic tree topology and operator distribution—affects supervised fine-tuning (SFT) performance in neural text-to-SQL (NL2SQL). We propose a training-free metric to quantify structural alignment, enabling prior estimation of dataset suitability for SFT. Experiments across three cross-domain NL2SQL benchmarks—using multiple large language model families—demonstrate that the metric strongly predicts downstream execution accuracy and SQL generation quality: high alignment consistently yields substantial gains, whereas low alignment delivers marginal improvement. This study is the first to systematically establish structural alignment as a decisive factor governing SFT success in NL2SQL. Moreover, it introduces an interpretable, reusable evaluation framework for alignment-aware data selection and construction—bridging a critical gap between data curation and model adaptation in semantic parsing.

📝 Abstract
Supervised Fine-Tuning (SFT) is an effective method for adapting Large Language Models (LLMs) on downstream tasks. However, variability in training data can hinder a model's ability to generalize across domains. This paper studies the problem of dataset alignment for Natural Language to SQL (NL2SQL or text to SQL), examining how well SFT training data matches the structural characteristics of target queries and how this alignment impacts model performance. We hypothesize that alignment can be accurately estimated by comparing the distributions of structural SQL features across the training set, target data, and the model's predictions prior to SFT. Through comprehensive experiments on three large cross-domain NL2SQL benchmarks and multiple model families, we show that structural alignment is a strong predictor of fine-tuning success. When alignment is high, SFT yields substantial gains in accuracy and SQL generation quality; when alignment is low, improvements are marginal or absent. These findings highlight the importance of alignment-aware data selection for effective fine-tuning and generalization in NL2SQL tasks.
Problem

Research questions and friction points this paper is trying to address.

Evaluating dataset alignment impact on text-to-SQL model generalization
Measuring structural SQL feature distribution alignment across datasets
Assessing alignment as predictor for fine-tuning success in NL2SQL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates dataset alignment via structural SQL features
Uses cross-domain benchmarks for comprehensive experiments
Proposes alignment-aware data selection for fine-tuning
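The alignment idea above can be sketched in code. The following is a minimal illustration, not the authors' exact metric: it assumes alignment is estimated by extracting a normalized distribution of SQL operator frequencies from each query set and comparing the training and target distributions with Jensen-Shannon divergence. The operator list, tokenization, and scoring (1 minus divergence) are illustrative choices.

```python
import math
import re
from collections import Counter

# Structural SQL features to count -- an illustrative set,
# not the paper's exact feature inventory.
OPERATORS = ["select", "join", "where", "group by", "order by",
             "having", "union", "intersect", "except", "limit"]

def operator_distribution(queries):
    """Normalized frequency of each structural operator over a query set."""
    counts = Counter()
    for q in queries:
        q = q.lower()
        for op in OPERATORS:
            # Allow flexible whitespace inside multi-word operators.
            pattern = r"\b" + op.replace(" ", r"\s+") + r"\b"
            counts[op] += len(re.findall(pattern, q))
    total = sum(counts.values()) or 1
    return [counts[op] / total for op in OPERATORS]

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2, bounded in [0, 1])."""
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def alignment_score(train_queries, target_queries):
    """Higher score = closer structural alignment between the two sets."""
    p = operator_distribution(train_queries)
    q = operator_distribution(target_queries)
    return 1.0 - js_divergence(p, q)
```

Under this sketch, a training set dominated by simple SELECT-WHERE queries scores near 1.0 against a structurally similar target, and lower against a target full of joins and aggregations, matching the paper's claim that such a score can be computed before any fine-tuning.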
Davood Rafiei
Professor of Computer Science, University of Alberta
Databases · LLMs · Web IR · Data Preparation
Morgan Lindsay Heisler
Huawei Tech. Canada, Vancouver, BC, Canada
Weiwei Zhang
Huawei Tech. Canada, Vancouver, BC, Canada
Mohammadreza Pourreza
Researcher at Google
NLP · Information Retrieval
Yong Zhang
Huawei Tech. Canada, Vancouver, BC, Canada