The Consistency Hypothesis in Uncertainty Quantification for Large Language Models

📅 2025-06-26
🤖 AI Summary
This paper investigates whether output consistency serves as a valid proxy for confidence estimation in large language models (LLMs), i.e., whether the “consistency hypothesis” holds. Method: We formally define and empirically evaluate three distinct consistency hypotheses across diverse tasks—including question answering, summarization, and Text-to-SQL—using rigorous statistical hypothesis testing to assess their generality and robustness. Among them, the “Sim-Any” hypothesis—based on cosine similarity between any two sampled outputs—demonstrates superior robustness and practicality. Leveraging this finding, we propose a training-free, black-box uncertainty quantification (UQ) method that requires only API calls and similarity aggregation, without access to model internals or fine-tuning. Contribution/Results: Evaluated on eight benchmark datasets, our approach significantly outperforms existing black-box UQ baselines, achieving substantial improvements in confidence calibration performance.
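The core idea behind the `Sim-Any` hypothesis can be illustrated with a short sketch: sample several generations for the same prompt, embed each one, and aggregate the cosine similarities between all pairs as a confidence score. Everything here is illustrative, not the paper's implementation: the bag-of-words `embed` function is a stand-in for a real sentence-embedding model, and mean aggregation is just one simple choice of aggregator.

```python
# Hedged sketch of a "Sim-Any"-style black-box confidence score.
# Assumptions (not from the paper): bag-of-words embedding, mean aggregation.
from collections import Counter
from itertools import combinations
from math import sqrt

def embed(text: str) -> Counter:
    # Stand-in embedding: lowercase bag-of-words counts.
    # A real system would use a sentence-embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def consistency_confidence(samples: list[str]) -> float:
    # Aggregate similarity over all unordered pairs of sampled outputs;
    # higher agreement among samples -> higher estimated confidence.
    pairs = list(combinations(samples, 2))
    if not pairs:
        return 0.0
    return sum(cosine(embed(x), embed(y)) for x, y in pairs) / len(pairs)
```

In use, the `samples` would come from repeated API calls to the same LLM with a nonzero temperature; no model internals, logits, or fine-tuning are needed, which is what makes the method black-box and training-free.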

📝 Abstract
Estimating the confidence of large language model (LLM) outputs is essential for real-world applications requiring high user trust. Black-box uncertainty quantification (UQ) methods, relying solely on model API access, have gained popularity due to their practical benefits. In this paper, we examine the implicit assumption behind several UQ methods, which use generation consistency as a proxy for confidence, an idea we formalize as the consistency hypothesis. We introduce three mathematical statements with corresponding statistical tests to capture variations of this hypothesis and metrics to evaluate LLM output conformity across tasks. Our empirical investigation, spanning 8 benchmark datasets and 3 tasks (question answering, text summarization, and text-to-SQL), highlights the prevalence of the hypothesis under different settings. Among the statements, we highlight the "Sim-Any" hypothesis as the most actionable, and demonstrate how it can be leveraged by proposing data-free black-box UQ methods that aggregate similarities between generations for confidence estimation. These approaches can outperform the closest baselines, showcasing the practical value of the empirically observed consistency hypothesis.
Problem

Research questions and friction points this paper is trying to address.

Evaluating confidence in large language model outputs
Testing consistency hypothesis for uncertainty quantification
Developing black-box methods for confidence estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formalize consistency hypothesis for LLM confidence
Introduce statistical tests for hypothesis variations
Propose data-free black-box UQ methods