Language Model Uncertainty Quantification with Attention Chain

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit overconfident uncertainty estimates in multi-step reasoning tasks, undermining reliability and trustworthiness. Method: This paper proposes an efficient marginalization-based uncertainty quantification (UQ) method grounded in the “attention chain”—a novel mechanism that backtracks attention weights to identify semantically critical tokens, followed by probability-threshold pruning to construct compact, interpretable reasoning paths. Uncertainty calibration is achieved via chained marginal probability approximation, avoiding costly sampling. Contribution/Results: The approach reduces computational overhead by an order of magnitude compared to standard sampling-based UQ methods. Extensive experiments across multiple reasoning benchmarks demonstrate significant mitigation of overconfidence and marked improvements in UQ reliability. By integrating interpretability with efficiency, this work establishes a lightweight, transparent paradigm for uncertainty modeling in LLM-based reasoning—advancing the foundation for trustworthy AI inference.

📝 Abstract
Accurately quantifying a large language model's (LLM) predictive uncertainty is crucial for judging the reliability of its answers. While most existing research focuses on short, directly answerable questions with closed-form outputs (e.g., multiple-choice), involving intermediate reasoning steps in LLM responses is increasingly important. This added complexity complicates uncertainty quantification (UQ) because the probabilities assigned to answer tokens are conditioned on a vast space of preceding reasoning tokens. Direct marginalization is infeasible, and the dependency inflates probability estimates, causing overconfidence in UQ. To address this, we propose UQAC, an efficient method that narrows the reasoning space to a tractable size for marginalization. UQAC iteratively constructs an "attention chain" of tokens deemed "semantically crucial" to the final answer via a backtracking procedure. Starting from the answer tokens, it uses attention weights to identify the most influential predecessors, then iterates this process until reaching the input tokens. Similarity filtering and probability thresholding further refine the resulting chain, allowing us to approximate the marginal probabilities of the answer tokens, which serve as the LLM's confidence. We validate UQAC on multiple reasoning benchmarks with advanced open-source LLMs, demonstrating that it consistently delivers reliable UQ estimates with high computational efficiency.
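The backtracking procedure described in the abstract can be sketched as follows. This is a minimal illustration with hypothetical names and a single aggregated attention matrix; the actual UQAC method additionally applies similarity filtering and probability thresholding, and operates on the model's per-layer, per-head attention:

```python
import numpy as np

def backtrack_attention_chain(attn, answer_ids, top_k=2, num_input_tokens=0):
    """Trace an 'attention chain' backward from the answer tokens.

    attn: (seq_len, seq_len) causal attention matrix (e.g., averaged over
          heads and layers); row i attends over positions < i.
    answer_ids: positions of the final-answer tokens.
    Returns the sorted positions deemed influential for the answer.
    """
    chain = set(int(i) for i in answer_ids)
    frontier = list(answer_ids)
    while frontier:
        next_frontier = []
        for pos in frontier:
            if pos < num_input_tokens:
                # Reached the input prompt: stop expanding this branch.
                continue
            # Pick the most-attended predecessors of this token.
            preds = np.argsort(attn[pos, :pos])[::-1][:top_k]
            for p in preds:
                if int(p) not in chain:
                    chain.add(int(p))
                    next_frontier.append(int(p))
        frontier = next_frontier
    return sorted(chain)
```

Starting from the answer positions, each iteration adds the strongest attention predecessors and repeats on those, so the chain grows backward until it bottoms out at the input tokens.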
Problem

Research questions and friction points this paper is trying to address.

Quantify predictive uncertainty in LLMs for complex reasoning tasks
Address overconfidence from token dependencies in uncertainty estimation
Efficiently marginalize probabilities across large reasoning token spaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

UQAC narrows reasoning space for marginalization
Attention chain identifies semantically crucial tokens
Similarity filtering refines confidence estimates
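The chained marginal-probability approximation can be illustrated with a toy sketch. The function name, arguments, and pruning rule below are hypothetical simplifications; UQAC's actual estimator is defined over the model's autoregressive token probabilities along the refined chain:

```python
import math

def chain_confidence(token_logprobs, chain, answer, min_prob=0.1):
    """Approximate answer confidence from a pruned reasoning chain.

    token_logprobs: dict mapping token position -> model log-probability.
    chain: positions of chain (reasoning) tokens; answer: answer positions.
    Low-probability chain tokens are pruned before the chained product,
    avoiding marginalization over the full reasoning space.
    """
    kept = [i for i in chain if math.exp(token_logprobs[i]) >= min_prob]
    logp = sum(token_logprobs[i] for i in kept)
    logp += sum(token_logprobs[i] for i in answer)
    return math.exp(logp)
```

Because the chain is compact, this product is a single cheap pass over stored token probabilities, in contrast to sampling-based UQ that requires many full generations.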
Yinghao Li
Applied Scientist, AWS
NLP
Rushi Qiang
Tsinghua University, GaTech
Machine Learning, Agents
Lama Moukheiber
Georgia Institute of Technology, Atlanta, USA
Chao Zhang
Georgia Institute of Technology, Atlanta, USA