Calibrating LLMs for Text-to-SQL Parsing by Leveraging Sub-clause Frequencies

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
In text-to-SQL generation, large language models frequently exhibit "high-confidence errors": producing incorrect SQL queries while assigning them excessively high confidence scores, which undermines reliability assessment. To address this, the paper introduces the first post-hoc calibration benchmark tailored to text-to-SQL and proposes a fine-grained calibration signal, sub-clause frequency (SCF), derived from the clause structure of SQL queries. It further designs multivariate Platt scaling (MPS), which jointly calibrates model outputs by combining SCF scores with raw token-level probabilities. Experiments on WikiSQL and Spider show that MPS substantially reduces Expected Calibration Error (ECE) and improves F1-score for error detection, outperforming both uncalibrated model outputs and standard Platt scaling. The approach provides a principled, post-hoc uncertainty quantification framework for text-to-SQL systems, enhancing trustworthiness without modifying model architecture or training.
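The summary's headline metric, Expected Calibration Error, can be sketched in a few lines. This uses the standard equal-width-bin formulation; the paper's exact binning choices are not reproduced here:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the gap
    |accuracy - mean confidence| per bin, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for conf, y in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into last bin
        bins[idx].append((conf, y))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(y for _, y in b) / len(b)
        ece += (len(b) / n) * abs(accuracy - avg_conf)
    return ece
```

An overconfident parser (e.g. 0.9 confidence on queries that are right only 25% of the time) yields a large ECE; a well-calibrated one drives it toward zero.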

📝 Abstract
While large language models (LLMs) achieve strong performance on text-to-SQL parsing, they sometimes exhibit unexpected failures in which they are confidently incorrect. Building trustworthy text-to-SQL systems thus requires eliciting reliable uncertainty measures from the LLM. In this paper, we study the problem of providing a calibrated confidence score that conveys the likelihood of an output query being correct. Our work is the first to establish a benchmark for post-hoc calibration of LLM-based text-to-SQL parsing. In particular, we show that Platt scaling, a canonical method for calibration, provides substantial improvements over directly using raw model output probabilities as confidence scores. Furthermore, we propose a method for text-to-SQL calibration that leverages the structured nature of SQL queries to provide more granular signals of correctness, named "sub-clause frequency" (SCF) scores. Using multivariate Platt scaling (MPS), our extension of the canonical Platt scaling technique, we combine individual SCF scores into an overall accurate and calibrated score. Empirical evaluation on two popular text-to-SQL datasets shows that our approach of combining MPS and SCF yields further improvements in calibration and the related task of error detection over traditional Platt scaling.
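Platt scaling fits a sigmoid over a single raw score; the multivariate extension described in the abstract fits the same sigmoid over a feature vector (e.g. a raw model probability plus per-clause SCF scores). A minimal pure-Python sketch via gradient-descent logistic regression; the paper's exact fitting procedure and feature set are assumptions here:

```python
import math

def mps_fit(features, labels, lr=0.1, epochs=2000):
    """Fit a multivariate Platt-style calibrator sigmoid(w.x + b).

    features: list of feature vectors per query (e.g. [raw prob, SCF scores...])
    labels:   1 if the generated SQL query was correct, else 0.
    Plain batch gradient descent; a real setup would use a held-out split.
    """
    d, n = len(features[0]), len(features)
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for x, y in zip(features, labels):
            p = mps_predict(w, b, x)
            err = p - y  # gradient of log-loss w.r.t. the logit
            for i in range(d):
                gw[i] += err * x[i]
            gb += err
        w = [wi - lr * gi / n for wi, gi in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def mps_predict(w, b, x):
    """Calibrated confidence for one feature vector."""
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
```

With a single feature this reduces to classical Platt scaling, which is why the abstract treats it as a strict extension of the canonical technique.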
Problem

Research questions and friction points this paper is trying to address.

Calibrating LLMs for reliable text-to-SQL parsing confidence
Improving uncertainty measures for SQL query correctness likelihood
Leveraging sub-clause frequencies to enhance calibration accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses sub-clause frequency for SQL correctness
Applies multivariate Platt scaling for calibration
Combines SCF and MPS for better accuracy
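One plausible reading of the SCF idea, sketched below: sample several SQL outputs for the same question, split each into keyword-delimited sub-clauses, and score each sub-clause of the candidate query by how often it recurs across the samples. The clause splitter and keyword list are hypothetical simplifications, not the paper's implementation:

```python
import re

# Hypothetical keyword set for illustration; real SQL has more clause types.
CLAUSE_KEYWORDS = r"\b(SELECT|FROM|WHERE|GROUP BY|HAVING|ORDER BY|LIMIT)\b"

def split_clauses(sql):
    """Split a SQL string into (KEYWORD, body) sub-clause pairs."""
    parts = re.split(CLAUSE_KEYWORDS, sql, flags=re.IGNORECASE)
    # re.split with a capturing group interleaves keywords and bodies.
    return [(kw.upper(), body.strip())
            for kw, body in zip(parts[1::2], parts[2::2])]

def scf_scores(candidate_sql, sampled_sqls):
    """For each sub-clause of the candidate, the fraction of sampled
    queries containing an identical sub-clause."""
    sample_sets = [set(split_clauses(s)) for s in sampled_sqls]
    return {clause: sum(clause in s for s in sample_sets) / len(sampled_sqls)
            for clause in split_clauses(candidate_sql)}
```

Intuitively, a WHERE clause that appears in only one of ten samples is a localized warning sign even when the model's overall sequence probability is high; feeding these per-clause frequencies into MPS is what gives the calibrator its fine-grained signal.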