Uncertainty-Aware Attention Heads: Efficient Unsupervised Uncertainty Quantification for LLMs

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing uncertainty quantification (UQ) methods for large language models (LLMs) suffer from high computational overhead or rely on supervised signals, limiting their practicality for mitigating hallucination. Method: We propose RAUQ, a fully unsupervised, single-forward-pass, sequence-level UQ method. RAUQ is the first to empirically identify and exploit a spontaneous decay pattern in "uncertainty-aware" attention heads of Transformers, wherein these heads progressively diminish attention to preceding tokens during incorrect generations. It derives real-time, white-box uncertainty scores via attention-weight analysis, recurrent aggregation, and token-level confidence modeling, requiring no labels, fine-tuning, or auxiliary training. Results: RAUQ achieves state-of-the-art performance across 12 diverse tasks on four mainstream LLMs, with computational overhead under 1% of inference latency. It exhibits strong generalization and zero label dependency, enabling plug-and-play deployment.

📝 Abstract
Large language models (LLMs) exhibit impressive fluency, but often produce critical errors known as "hallucinations". Uncertainty quantification (UQ) methods are a promising tool for coping with this fundamental shortcoming. Yet, existing UQ methods face challenges such as high computational overhead or reliance on supervised learning. Here, we aim to bridge this gap. In particular, we propose RAUQ (Recurrent Attention-based Uncertainty Quantification), an unsupervised approach that leverages intrinsic attention patterns in transformers to detect hallucinations efficiently. By analyzing attention weights, we identified a peculiar pattern: drops in attention to preceding tokens are systematically observed during incorrect generations for certain "uncertainty-aware" heads. RAUQ automatically selects such heads, recurrently aggregates their attention weights and token-level confidences, and computes sequence-level uncertainty scores in a single forward pass. Experiments across 4 LLMs and 12 question answering, summarization, and translation tasks demonstrate that RAUQ yields excellent results, outperforming state-of-the-art UQ methods with minimal computational overhead (<1% latency). Moreover, it requires no task-specific labels and no careful hyperparameter tuning, offering plug-and-play real-time hallucination detection in white-box LLMs.
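To make the recurrent aggregation idea concrete, here is a minimal sketch of a RAUQ-style score. It assumes we have already extracted, from one selected "uncertainty-aware" head, each generated token's attention weight to its preceding token, along with each token's probability. The recurrence, the mixing weight `alpha`, and the final averaging step are illustrative assumptions for exposition; the paper's exact formulation may differ.

```python
def rauq_like_score(attn_to_prev, token_probs, alpha=0.5):
    """Hypothetical RAUQ-style sequence-level uncertainty score.

    attn_to_prev[t]: attention weight from token t to token t-1 in a
                     selected "uncertainty-aware" head (entry 0 unused).
    token_probs[t]:  model probability of generated token t.
    alpha:           mixing weight between the current token's
                     probability and the attention-carried history
                     (illustrative choice, not from the paper).
    """
    # Initialize confidence with the first token's probability.
    c_prev = token_probs[0]
    confidences = [c_prev]
    for a, p in zip(attn_to_prev[1:], token_probs[1:]):
        # Recurrent aggregation: blend the current token probability
        # with the previous confidence, scaled by the attention the
        # head pays to the preceding token. A drop in that attention
        # weakens the carried-over confidence signal.
        c = alpha * p + (1 - alpha) * a * c_prev
        confidences.append(c)
        c_prev = c
    # Sequence-level uncertainty as one minus mean confidence
    # (a simple aggregation assumed here for illustration).
    return 1.0 - sum(confidences) / len(confidences)
```

Because every quantity comes from a single forward pass with attention outputs enabled, this kind of score adds negligible latency on top of ordinary generation, which matches the <1% overhead the abstract reports.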
Problem

Research questions and friction points this paper is trying to address.

Unsupervised uncertainty quantification for LLMs
Detect hallucinations using attention patterns
Minimal computational overhead in real-time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised uncertainty quantification via attention patterns
Recurrent aggregation of attention weights for efficiency
Plug-and-play real-time hallucination detection