Diverging Towards Hallucination: Detection of Failures in Vision-Language Models via Multi-token Aggregation

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Contemporary vision-language models (VLMs) frequently generate hallucinated objects or unsafe text, yet mainstream hallucination detectors rely solely on the first-token logit, overlooking reliability signals embedded in early-generation tokens. A KL-divergence analysis shows that hallucinations often emerge progressively during autoregressive generation, as subtle inconsistencies accumulate. To exploit this, we propose Multi-Token Reliability Estimation (MTRE), a lightweight, white-box method that aggregates the logits of the first ten generated tokens using multi-token log-likelihood ratios and self-attention. By capturing reliability dynamics across multiple initial tokens, MTRE overcomes the core limitation of single-token detection and enables earlier, more robust hallucination identification. Evaluated on seven benchmarks, including MAD-Bench, MM-SafetyBench, and MathVista, MTRE improves AUROC by 9.4 points over single-token linear probing and by 12.1 points over P(True), establishing new state-of-the-art performance for hallucination detection in open-source VLMs.
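The KL-divergence signal the summary describes can be illustrated with a small numpy sketch: compute the per-position divergence between two early-token logit sequences. The vocabulary size and the random logits here are toy stand-ins, not the paper's data or model:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the vocabulary axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def early_token_kl(logits_p, logits_q):
    """Per-position KL(p || q) between two logit sequences,
    each of shape (num_tokens, vocab_size)."""
    p = softmax(logits_p)
    log_p = np.log(p)
    log_q = np.log(softmax(logits_q))
    return (p * (log_p - log_q)).sum(axis=-1)

# Toy comparison over the first 10 generated tokens
rng = np.random.default_rng(0)
halluc = rng.normal(size=(10, 1000))    # toy vocab of 1000; real VLM vocabs are ~32k
grounded = rng.normal(size=(10, 1000))
kl_per_token = early_token_kl(halluc, grounded)
print(kl_per_token.shape)           # (10,)
print(bool((kl_per_token >= 0).all()))  # True
```

Tracking how this per-token divergence evolves across positions, rather than looking only at position zero, is the intuition behind using later-token logits.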

📝 Abstract
Vision-language models (VLMs) now rival human performance on many multimodal tasks, yet they still hallucinate objects or generate unsafe text. Current hallucination detectors, e.g., single-token linear probing (SLP) and P(True), typically analyze only the logit of the first generated token, or just its highest-scoring component, overlooking richer signals embedded in earlier token distributions. We demonstrate that analyzing the complete sequence of early logits can provide substantially more diagnostic information. We emphasize that hallucinations may only emerge after several tokens, as subtle inconsistencies accumulate over time. By analyzing the Kullback-Leibler (KL) divergence between logits corresponding to hallucinated and non-hallucinated tokens, we underscore the importance of incorporating later-token logits to more accurately capture the reliability dynamics of VLMs. In response, we introduce Multi-Token Reliability Estimation (MTRE), a lightweight, white-box method that aggregates logits from the first ten tokens using multi-token log-likelihood ratios and self-attention. Despite the challenges posed by large vocabulary sizes and long logit sequences, MTRE remains efficient and tractable. On MAD-Bench, MM-SafetyBench, MathVista, and four compositional-geometry benchmarks, MTRE improves AUROC by 9.4 ± 1.3 points over SLP and by 12.1 ± 1.7 points over P(True), setting a new state-of-the-art in hallucination detection for open-source VLMs.
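As a rough illustration of the aggregation step, the sketch below pools a per-token feature sequence with a single self-attention head followed by mean pooling. The feature dimension, random weights, and pooling choice are assumptions for illustration, not MTRE's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(features, Wq, Wk, Wv):
    """Single-head self-attention over a (num_tokens, d) feature
    sequence, mean-pooled into one reliability embedding of shape (d,)."""
    Q, K, V = features @ Wq, features @ Wk, features @ Wv
    scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]), axis=-1)
    return (scores @ V).mean(axis=0)

rng = np.random.default_rng(1)
d = 16
feats = rng.normal(size=(10, d))        # features for the first 10 generated tokens
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
pooled = attention_pool(feats, Wq, Wk, Wv)
print(pooled.shape)   # (16,)
```

In a full detector, the pooled embedding would feed a small classifier head that outputs a reliability score for the whole response.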
Problem

Research questions and friction points this paper is trying to address.

Detect hallucinations in vision-language models using multi-token analysis
Improve reliability by aggregating early token logits for accurate detection
Address limitations of single-token methods with lightweight white-box approach
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-token aggregation for hallucination detection
Kullback-Leibler divergence analysis of logits
Lightweight white-box method with self-attention
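To make the multi-token log-likelihood ratio concrete, here is a minimal sketch that scores a response by summing, over the first k generated tokens, the log-probability gap between two hypothetical class-conditional distributions (reliable vs. hallucinated). Both logit sequences are toy inputs; the paper's actual estimator and features are not reproduced here:

```python
import numpy as np

def softmax(x):
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_token_llr(reliable_logits, halluc_logits, token_ids):
    """Cumulative log-likelihood ratio sum_t [log p_rel(y_t) - log p_hal(y_t)]
    over the generated token ids. Both logit arrays: (num_tokens, vocab)."""
    lp = np.log(softmax(reliable_logits))
    lq = np.log(softmax(halluc_logits))
    idx = np.arange(len(token_ids))
    return (lp[idx, token_ids] - lq[idx, token_ids]).sum()

rng = np.random.default_rng(2)
k, vocab = 10, 500
score = multi_token_llr(rng.normal(size=(k, vocab)),
                        rng.normal(size=(k, vocab)),
                        rng.integers(0, vocab, size=k))
print(bool(np.isfinite(score)))  # True
```

A positive score favors the reliable hypothesis; thresholding it (or feeding per-token terms into the attention aggregator) yields a hallucination decision.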