🤖 AI Summary
High-quality, structurally diverse, and complex Text-to-SQL training data remain scarce, and existing synthesis methods struggle to control the complexity and diversity of SQL structures. This work proposes a structure-aware data synthesis framework that introduces, for the first time, six atomic transformation operators defined over SQL abstract syntax trees (ASTs). Through exploratory augmentation and an adaptive directional evolution strategy, the framework incrementally increases query complexity along dimensions such as joins, predicates, aggregations, and nesting. Execution-guided validation and schema-aware deduplication keep the generated data both high in quality and structurally diverse. Remarkably, fine-tuning a 7B-parameter model on data synthesized by this method, amounting to only 1/18 the size of the SynSQL dataset, surpasses training on the full SynSQL dataset.
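To make the operator idea concrete, here is a minimal sketch of two AST-level transformations in the spirit of the paper's atomic operators (one predicate, one nesting), assuming the sqlglot Python library. The function names and operator granularity are illustrative assumptions, not the paper's exact definitions.

```python
# Sketch of two atomic AST transformation operators, assuming sqlglot.
# These are illustrative stand-ins for the paper's operators, not their
# exact definitions.
import sqlglot


def add_predicate(sql: str, condition: str) -> str:
    """Predicate operator: AND an extra filter into the WHERE clause."""
    tree = sqlglot.parse_one(sql)
    return tree.where(condition).sql()


def add_nesting(sql: str, alias: str = "sub") -> str:
    """Nesting operator: wrap the whole query as a derived table."""
    tree = sqlglot.parse_one(sql)
    return sqlglot.select("*").from_(tree.subquery(alias)).sql()


seed = "SELECT name, age FROM users"
evolved = add_nesting(add_predicate(seed, "age > 30"))
print(evolved)
# SELECT * FROM (SELECT name, age FROM users WHERE age > 30) AS sub
```

Composing such operators step by step is what lets an evolution strategy ratchet up complexity along one chosen dimension at a time instead of regenerating queries wholesale.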
📝 Abstract
Training effective Text-to-SQL models remains challenging due to the scarcity of high-quality, diverse, and structurally complex datasets. Existing methods either rely on limited human-annotated corpora or synthesize datasets by simply prompting LLMs without explicit control over SQL structure, often yielding limited structural diversity and complexity. To address this, we introduce EvolSQL, a structure-aware data synthesis framework that evolves SQL queries from seed data into richer and more semantically diverse forms. EvolSQL starts with an exploratory Query-SQL expansion to broaden question diversity and improve schema coverage, then applies an adaptive directional evolution strategy that uses six atomic transformation operators derived from the SQL Abstract Syntax Tree to progressively increase query complexity along relational, predicate, aggregation, and nesting dimensions. An execution-grounded SQL refinement module and schema-aware deduplication further ensure high-quality, structurally diverse question-SQL mapping pairs. Experimental results show that a 7B model fine-tuned on our data outperforms one trained on the much larger SynSQL dataset while using only 1/18 of the data.
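As an illustration of the validation and deduplication steps, the sketch below assumes an in-memory SQLite copy of the target schema for execution grounding, and a literal-normalized AST as the structural dedup key. Both choices are assumptions for the sketch; the paper's actual refinement module and dedup criteria may differ.

```python
# Hedged sketch: execution-grounded filtering plus structure-based dedup.
# Assumes an in-memory SQLite schema copy; the paper's exact criteria
# may differ.
import sqlite3

import sqlglot
from sqlglot import exp


def executes(sql: str, conn: sqlite3.Connection) -> bool:
    """Keep only queries that run without error against the schema."""
    try:
        conn.execute(sql).fetchall()
        return True
    except sqlite3.Error:
        return False


def structure_key(sql: str) -> str:
    """Dedup key: replace literals so queries differing only in constant
    values collapse to a single structural signature."""
    tree = sqlglot.parse_one(sql)
    for lit in list(tree.find_all(exp.Literal)):
        lit.replace(exp.Literal.string("?"))
    return tree.sql()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INT)")

candidates = [
    "SELECT name FROM users WHERE age > 30",
    "SELECT name FROM users WHERE age > 40",  # same structure, new constant
    "SELECT nam FROM users",                  # invalid column, filtered out
]
seen, kept = set(), []
for sql in candidates:
    key = structure_key(sql)
    if executes(sql, conn) and key not in seen:
        seen.add(key)
        kept.append(sql)
print(kept)  # ['SELECT name FROM users WHERE age > 30']
```

The literal-normalization choice matters: without it, trivially rephrased constants would inflate the dataset while adding no new SQL structure.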