🤖 AI Summary
This paper investigates whether output consistency serves as a valid proxy for confidence estimation in large language models (LLMs), i.e., whether the “consistency hypothesis” holds. Method: The authors formally define three distinct consistency hypotheses and empirically evaluate them across diverse tasks (question answering, summarization, and Text-to-SQL) using rigorous statistical hypothesis testing to assess their generality and robustness. Among them, the “Sim-Any” hypothesis, based on cosine similarity between any two sampled outputs, proves the most robust and practical. Leveraging this finding, they propose a training-free, black-box uncertainty quantification (UQ) method that requires only API calls and similarity aggregation, without access to model internals or fine-tuning. Contribution/Results: Evaluated on eight benchmark datasets, the approach significantly outperforms existing black-box UQ baselines, achieving substantial gains in confidence calibration.
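The core recipe described above can be sketched in a few lines: sample several generations for the same prompt, embed each one, and aggregate pairwise cosine similarities into a single confidence score. This is a minimal illustration, not the paper's implementation; the embedding step is stubbed out (in practice it would be any sentence-embedding model or API), and the mean is just one possible aggregation of the pairwise similarities.

```python
# Hedged sketch of a "Sim-Any"-style black-box confidence estimate:
# aggregate cosine similarities between any two sampled outputs.
import math
from itertools import combinations

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def consistency_confidence(embeddings):
    """Mean pairwise cosine similarity over sampled generations.

    `embeddings` holds one vector per sampled LLM output; higher mean
    similarity is read as higher confidence under the hypothesis.
    """
    pairs = list(combinations(embeddings, 2))
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)

# Toy embeddings standing in for embedded generations: identical
# outputs yield confidence 1.0, orthogonal (inconsistent) ones 0.0.
consistent = [[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]]
scattered = [[1.0, 0.0], [0.0, 1.0]]
print(consistency_confidence(consistent))  # 1.0
print(consistency_confidence(scattered))   # 0.0
```

Because the score needs only the sampled outputs themselves, the estimator stays black-box: no logits, internals, or fine-tuning are required, matching the API-only setting the summary describes.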
📝 Abstract
Estimating the confidence of large language model (LLM) outputs is essential for real-world applications requiring high user trust. Black-box uncertainty quantification (UQ) methods, relying solely on model API access, have gained popularity due to their practical benefits. In this paper, we examine the implicit assumption behind several UQ methods, which use generation consistency as a proxy for confidence, an idea we formalize as the consistency hypothesis. We introduce three mathematical statements with corresponding statistical tests to capture variations of this hypothesis and metrics to evaluate LLM output conformity across tasks. Our empirical investigation, spanning 8 benchmark datasets and 3 tasks (question answering, text summarization, and text-to-SQL), highlights the prevalence of the hypothesis under different settings. Among the statements, we highlight the “Sim-Any” hypothesis as the most actionable, and demonstrate how it can be leveraged by proposing data-free black-box UQ methods that aggregate similarities between generations for confidence estimation. These approaches can outperform the closest baselines, showcasing the practical value of the empirically observed consistency hypothesis.