🤖 AI Summary
This work investigates how structural alignment between training data and target queries, measured via SQL syntactic tree topology and operator distributions, affects supervised fine-tuning (SFT) performance in neural text-to-SQL (NL2SQL). We propose a training-free metric to quantify structural alignment, enabling prior estimation of dataset suitability for SFT. Experiments across three cross-domain NL2SQL benchmarks, using multiple large language model families, demonstrate that the metric strongly predicts downstream execution accuracy and SQL generation quality: high alignment consistently yields substantial gains, whereas low alignment delivers marginal improvement. This study is the first to systematically establish structural alignment as a decisive factor governing SFT success in NL2SQL. Moreover, it introduces an interpretable, reusable evaluation framework for alignment-aware data selection and construction, bridging a critical gap between data curation and model adaptation in semantic parsing.
📄 Abstract
Supervised Fine-Tuning (SFT) is an effective method for adapting Large Language Models (LLMs) to downstream tasks. However, variability in training data can hinder a model's ability to generalize across domains. This paper studies the problem of dataset alignment for Natural Language to SQL (NL2SQL, also known as text-to-SQL), examining how well SFT training data matches the structural characteristics of target queries and how this alignment impacts model performance. We hypothesize that alignment can be accurately estimated by comparing the distributions of structural SQL features across the training set, the target data, and the model's predictions prior to SFT. Through comprehensive experiments on three large cross-domain NL2SQL benchmarks and multiple model families, we show that structural alignment is a strong predictor of fine-tuning success. When alignment is high, SFT yields substantial gains in accuracy and SQL generation quality; when alignment is low, improvements are marginal or absent. These findings highlight the importance of alignment-aware data selection for effective fine-tuning and generalization in NL2SQL tasks.
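The abstract describes estimating alignment by comparing distributions of structural SQL features between datasets. A minimal sketch of that idea, assuming operator frequencies as the structural feature and Jensen-Shannon divergence as the distance (the paper's actual metric and feature set are not specified here, so these choices are illustrative):

```python
import math
import re
from collections import Counter

# Illustrative operator vocabulary; the paper may use a richer set of
# structural features (e.g., full syntactic tree shapes).
OPERATORS = ["SELECT", "JOIN", "WHERE", "GROUP BY", "ORDER BY",
             "HAVING", "LIMIT", "UNION"]

def operator_distribution(queries):
    """Count operator keywords across queries, normalized to a distribution."""
    counts = Counter()
    for q in queries:
        upper = q.upper()
        for op in OPERATORS:
            counts[op] += len(re.findall(re.escape(op), upper))
    total = sum(counts.values()) or 1
    return {op: counts[op] / total for op in OPERATORS}

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2, bounded in [0, 1])."""
    def kl(a, b):
        return sum(a[k] * math.log2(a[k] / b[k]) for k in a if a[k] > 0)
    m = {k: 0.5 * (p[k] + q[k]) for k in p}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical training and target query sets.
train = ["SELECT name FROM users WHERE age > 21",
         "SELECT a.id FROM a JOIN b ON a.id = b.id GROUP BY a.id"]
target = ["SELECT city FROM shops WHERE open = 1",
          "SELECT x FROM t ORDER BY x LIMIT 5"]

# High alignment score (close to 1) would predict SFT gains under this sketch.
alignment = 1.0 - js_divergence(operator_distribution(train),
                                operator_distribution(target))
print(round(alignment, 3))
```

Because the divergence is computed from the training set, target data, and pre-SFT predictions alone, no fine-tuning run is needed to score a candidate dataset, which is what makes the metric "training-free."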