🤖 AI Summary
To address the challenge of fine-grained error localization in SQL queries generated by large language models (LLMs), this paper proposes the first error detection framework based on node-level uncertainty estimation over abstract syntax trees (ASTs). Methodologically, we design a semantics-aware node correctness labeling algorithm and jointly encode schema-aware and lexical features to represent AST nodes, enabling a calibrated supervised classifier to predict per-node error probability. Our key contribution lies in elevating uncertainty modeling from conventional sequence-level aggregates to interpretable, structure-robust node-level granularity, thereby supporting precise diagnostic analysis and selective query execution. Experiments across multiple databases demonstrate an average AUC improvement of 27.44% over baselines, confirming strong cross-database generalization.
📝 Abstract
We present a practical framework for detecting errors in LLM-generated SQL by estimating uncertainty at the level of individual nodes in the query's abstract syntax tree (AST). Our approach proceeds in two stages. First, we introduce a semantically aware labeling algorithm that, given a generated SQL query and a gold reference, assigns node-level correctness without over-penalizing structural containers or alias variation. Second, we represent each node with a rich set of schema-aware and lexical features that capture identifier validity, alias resolution, type compatibility, scope ambiguity, and typo signals, and we train a supervised classifier to predict per-node error probabilities. We interpret these probabilities as calibrated uncertainty, enabling fine-grained diagnostics that pinpoint exactly where a query is likely to be wrong. Across multiple databases and datasets, our method substantially outperforms token log-probabilities: average AUC improves by 27.44% while maintaining robustness under cross-database evaluation. Beyond serving as an accuracy signal, node-level uncertainty supports targeted repair, human-in-the-loop review, and downstream selective execution. Together, these results establish node-centric, semantically grounded uncertainty estimation as a strong and interpretable alternative to aggregate sequence-level confidence measures.
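To make the node-level idea concrete, here is a minimal, self-contained sketch of how per-node error probabilities might be computed. It is not the paper's implementation: the toy AST (a flat list of node dicts), the hypothetical `SCHEMA`, and the hand-written `error_probability` stand-in for the trained classifier are all illustrative assumptions. It only shows two of the described signals (identifier validity against the schema and a typo-similarity score).

```python
import difflib

# Hypothetical schema: table name -> set of column names.
SCHEMA = {"employees": {"id", "name", "salary"}}

def node_features(node, schema):
    """Toy schema-aware + lexical features for one AST node."""
    kind, value = node["kind"], node["value"]
    feats = {"is_identifier": kind in ("table", "column")}
    if kind == "table":
        candidates = set(schema)
    elif kind == "column":
        candidates = set().union(*schema.values())
    else:
        return feats
    feats["valid"] = value in candidates
    # Typo signal: similarity to the closest real identifier.
    feats["typo_sim"] = max(
        (difflib.SequenceMatcher(None, value, c).ratio() for c in candidates),
        default=0.0,
    )
    return feats

def error_probability(node, schema):
    """Stand-in for the trained, calibrated classifier: invalid
    identifiers that are near-misses of real names score high."""
    f = node_features(node, schema)
    if not f["is_identifier"] or f.get("valid"):
        return 0.05
    return 0.5 + 0.5 * f.get("typo_sim", 0.0)

# Toy AST for: SELECT nme FROM employees   ("nme" is a typo of "name")
ast = [
    {"kind": "column", "value": "nme"},
    {"kind": "table", "value": "employees"},
]
scores = {n["value"]: error_probability(n, SCHEMA) for n in ast}
```

Here the misspelled column `nme` receives a high error probability while the valid table `employees` stays low, illustrating how node-level scores localize the fault instead of flagging the whole query.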