🤖 AI Summary
Large Vision-Language Models (LVLMs) frequently generate hallucinated outputs that are syntactically coherent but visually inconsistent. This paper identifies three intrinsic patterns underlying such hallucinations: progressive attenuation of visual information during autoregressive decoding, premature peak activation of semantically meaningful tokens in early layers, and visually grounded tokens that retain high logit rankings at inference despite never being decoded. To address this, we propose VISTA—a training-free, inference-time intervention framework—that combines two complementary mechanisms: reinforcing visual information in the activation space and leveraging early-layer activations to promote semantically meaningful decoding. VISTA is decoder-agnostic and requires no external supervision or fine-tuning. Evaluated across four benchmarks, four LVLM architectures, and three decoding strategies, VISTA reduces average hallucination rates by ~40%, consistently outperforming existing methods.
📝 Abstract
Large Vision-Language Models (LVLMs) can reason effectively over both textual and visual inputs, but they tend to hallucinate syntactically coherent yet visually ungrounded content. In this paper, we investigate the internal dynamics of hallucination by examining token logit rankings throughout the generation process, revealing three key patterns in how LVLMs process information: (1) gradual visual information loss -- visually grounded tokens gradually become less favored throughout generation; (2) early excitation -- semantically meaningful tokens achieve peak activation in layers earlier than the final layer; and (3) hidden genuine information -- visually grounded tokens, though not ultimately decoded, still retain relatively high rankings at inference. Based on these insights, we propose VISTA (Visual Information Steering with Token-logit Augmentation), a training-free, inference-time intervention framework that reduces hallucination while promoting genuine information. VISTA combines two complementary approaches: reinforcing visual information in the activation space and leveraging early-layer activations to promote semantically meaningful decoding. Compared to existing methods, VISTA requires no external supervision and is applicable to various decoding strategies. Extensive experiments show that VISTA reduces hallucination by about 40% on average on the evaluated open-ended generation task, and it consistently outperforms existing methods on four benchmarks across four architectures under three decoding strategies.
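The "early excitation" pattern suggests one way such an intervention could operate: project an early layer's hidden state through the unembedding matrix (the "logit lens" view) and blend it with the final-layer logits before sampling. The sketch below is purely illustrative -- the function name, the blending weight `alpha`, and the random toy tensors are assumptions, not the paper's actual method or numbers:

```python
import numpy as np

def blended_logits(hidden_states, W_unembed, early_layer, alpha=0.3):
    """Blend final-layer logits with early-layer ("logit lens") logits.

    hidden_states: per-layer hidden vectors, each of shape (d,)
    W_unembed:     unembedding matrix of shape (vocab, d)
    early_layer:   index of the earlier layer whose semantics we promote
    alpha:         weight on the early-layer logits (hypothetical knob)
    """
    final_logits = W_unembed @ hidden_states[-1]          # standard decoding logits
    early_logits = W_unembed @ hidden_states[early_layer] # logit-lens projection
    return (1 - alpha) * final_logits + alpha * early_logits

# Toy example with random hidden states and a random unembedding matrix.
rng = np.random.default_rng(0)
d, vocab, n_layers = 8, 16, 6
hs = [rng.normal(size=d) for _ in range(n_layers)]
W = rng.normal(size=(vocab, d))
logits = blended_logits(hs, W, early_layer=3)
print(logits.shape)  # one logit per vocabulary entry: (16,)
```

In a real LVLM the same idea would apply per decoding step, with `hidden_states` taken from the model's residual stream for the current token position.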