MageSQL: Enhancing In-context Learning for Text-to-SQL Applications with Large Language Models

๐Ÿ“… 2025-04-02
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses two key challenges in natural language-to-SQL (NL2SQL) translation: insufficient contextual modeling and difficulty in correcting SQL generation errors. To tackle these, we propose an LLM-centric enhancement framework. Our method introduces: (1) a novel graph contrastive learningโ€“based, SQL-aware few-shot example selection mechanism, which leverages graph neural networks to explicitly model SQL syntactic and semantic structures, thereby improving the quality of in-context learning; and (2) an interpretable SQL error detection and correction module that synergistically integrates prompt engineering, data augmentation, and post-hoc error rectification strategies. Evaluated on multiple standard benchmarks, our approach consistently outperforms state-of-the-art models, achieving average improvements of 5.2โ€“9.7 percentage points in execution accuracy. The results demonstrate substantial gains in both the natural language interface capability and robustness of database systems.

Technology Category

Application Category

๐Ÿ“ Abstract
The text-to-SQL problem aims to translate natural language questions into SQL statements to ease the interaction between database systems and end users. Recently, Large Language Models (LLMs) have exhibited impressive capabilities in a variety of tasks, including text-to-SQL. While prior works have explored various strategies for prompting LLMs to generate SQL statements, they still fall short of fully harnessing the power of LLM due to the lack of (1) high-quality contextual information when constructing the prompts and (2) robust feedback mechanisms to correct translation errors. To address these challenges, we propose MageSQL, a text-to-SQL approach based on in-context learning over LLMs. MageSQL explores a suite of techniques that leverage the syntax and semantics of SQL queries to identify relevant few-shot demonstrations as context for prompting LLMs. In particular, we introduce a graph-based demonstration selection method -- the first of its kind in the text-to-SQL problem -- that leverages graph contrastive learning adapted with SQL-specific data augmentation strategies. Furthermore, an error correction module is proposed to detect and fix potential inaccuracies in the generated SQL query. We conduct comprehensive evaluations on several benchmarking datasets. The results show that our proposed methods outperform state-of-the-art methods by an obvious margin.
Problem

Research questions and friction points this paper is trying to address.

Improving natural language to SQL translation accuracy
Enhancing context quality for LLM-based text-to-SQL systems
Developing error correction for generated SQL queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph-based demonstration selection for context
SQL-specific data augmentation strategies
Error correction module for SQL accuracy
๐Ÿ”Ž Similar Papers
No similar papers found.