NL2SQL-BUGs: A Benchmark for Detecting Semantic Errors in NL2SQL Translation

📅 2025-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing NL2SQL research lacks a dedicated benchmark for detecting semantic errors in natural-language-to-SQL translation. Method: We introduce NL2SQL-BUGs, the first benchmark specifically designed for identifying semantic deviations, comprising 2,018 expert-annotated samples spanning nine error categories and 31 fine-grained subtypes. We propose an original two-tier semantic error taxonomy, integrating schema-aware semantic analysis with collaborative evaluation across multiple large language models (GPT-4, Claude, Llama). Contribution/Results: Experiments reveal that state-of-the-art LLMs achieve only 75.16% accuracy in semantic error detection, exposing a critical robustness gap. Furthermore, the benchmark enabled the correction of 106 historical annotation errors in the BIRD dataset. This work fills a fundamental gap in NL2SQL semantic robustness evaluation and demonstrates NL2SQL-BUGs' practical value in enhancing both system reliability and annotation quality.

📝 Abstract
Natural Language to SQL (i.e., NL2SQL) translation is crucial for democratizing database access, but even state-of-the-art models frequently generate semantically incorrect SQL queries, hindering the widespread adoption of these techniques by database vendors. While existing NL2SQL benchmarks primarily focus on correct query translation, we argue that a benchmark dedicated to identifying common errors in NL2SQL translations is equally important, as accurately detecting these errors is a prerequisite for any subsequent correction, whether performed by humans or models. To address this gap, we propose NL2SQL-BUGs, the first benchmark dedicated to detecting and categorizing semantic errors in NL2SQL translation. NL2SQL-BUGs adopts a two-level taxonomy to systematically classify semantic errors, covering 9 main categories and 31 subcategories. The benchmark consists of 2,018 expert-annotated instances, each containing a natural language query, a database schema, and a SQL query, with detailed error annotations for semantically incorrect queries. Through comprehensive experiments, we demonstrate that current large language models exhibit significant limitations in semantic error detection, achieving an average detection accuracy of only 75.16%. Despite this, the models successfully detected 106 errors (accounting for 6.91%) in the widely-used NL2SQL dataset BIRD, which turned out to be previously unnoticed annotation errors in that benchmark. This highlights the importance of semantic error detection in NL2SQL systems.
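The abstract describes each benchmark instance as a natural language query, a database schema, and a (possibly incorrect) SQL query with error annotations. As a rough illustration only — the field names, error labels, and toy data below are invented for this sketch, not taken from the benchmark — a semantically incorrect instance might look like:

```python
import sqlite3

# Hypothetical sketch of what a single NL2SQL-BUGs-style instance might contain;
# the field names and error labels here are illustrative assumptions, not the
# benchmark's actual annotation format.
instance = {
    "question": "List customers who placed more than 5 orders.",
    "schema": "customers(id, name); orders(id, customer_id)",
    # Executable but semantically wrong: ">=" instead of ">" (off-by-one condition).
    "sql": (
        "SELECT c.name FROM customers c JOIN orders o ON o.customer_id = c.id "
        "GROUP BY c.id HAVING COUNT(o.id) >= 5"
    ),
    "is_correct": False,
    "error": {"category": "condition error", "subcategory": "comparison operator"},
}
corrected_sql = instance["sql"].replace(">= 5", "> 5")

# Tiny in-memory database on which the two queries diverge:
# Alice has exactly 5 orders, Bob has 6.
con = sqlite3.connect(":memory:")
con.executescript(
    "CREATE TABLE customers(id INTEGER, name TEXT);"
    "CREATE TABLE orders(id INTEGER, customer_id INTEGER);"
    "INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob');"
    "INSERT INTO orders VALUES (1,1),(2,1),(3,1),(4,1),(5,1);"
    "INSERT INTO orders VALUES (6,2),(7,2),(8,2),(9,2),(10,2),(11,2);"
)
buggy = sorted(r[0] for r in con.execute(instance["sql"]))
fixed = sorted(r[0] for r in con.execute(corrected_sql))
print(buggy)  # ['Alice', 'Bob'] -- Alice is wrongly included (exactly 5 orders)
print(fixed)  # ['Bob']          -- only customers with more than 5 orders
```

Note that the buggy query executes without any runtime failure and returns a plausible result, which is exactly why such semantic errors must be detected by inspection rather than by execution checks.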
Problem

Research questions and friction points this paper is trying to address.

Detecting semantic errors in NL2SQL translation.
Creating a benchmark for NL2SQL error categorization.
Evaluating large language models' error detection accuracy.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces NL2SQL-BUGs benchmark for semantic error detection
Uses two-level taxonomy for classifying semantic errors
Demonstrates limitations of large language models in error detection