🤖 AI Summary
Large Vision-Language Models (LVLMs) frequently generate object hallucinations in image captioning due to excessive reliance on irrelevant visual tokens during autoregressive decoding. To address this, we propose an instruction-aligned visual attention mechanism that identifies and suppresses such spurious tokens by contrasting attention distributions across semantically distinct instructions—requiring no fine-tuning or auxiliary training. Our method dynamically evaluates token importance via contrastive decoding and applies logit reweighting to achieve fine-grained, instruction-driven hallucination suppression. Evaluated on MME, POPE, and TextVQA benchmarks, it significantly reduces object hallucination rates while outperforming existing decoding-time mitigation strategies. The approach is lightweight, plug-and-play, and fully compatible with frozen LVLMs. Code is publicly available.
📝 Abstract
Despite the significant success of Large Vision-Language Models (LVLMs), these models still suffer from hallucinations when describing images, generating answers that include non-existent objects. It is reported that these models tend to over-focus on certain irrelevant image tokens that carry no information critical to answering the question, which distorts the output. To address this, we propose an Instruction-Aligned Visual Attention (IAVA) approach, which identifies irrelevant tokens by comparing changes in attention weights under two different instructions. By applying contrastive decoding, we dynamically adjust the logits generated from the original image tokens and the irrelevant image tokens, reducing the model's over-attention to irrelevant information. The experimental results demonstrate that IAVA consistently outperforms existing decoding techniques on benchmarks such as MME, POPE, and TextVQA in mitigating object hallucinations. Our IAVA approach is available online at https://github.com/Lee-lab558/IAVA.
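The two ingredients described above, selecting irrelevant image tokens from the attention shift between two instructions and then reweighting logits via contrastive decoding, can be sketched as follows. This is a minimal illustration only: the selection criterion (tokens whose attention changes least across instructions), the `top_k` parameter, and the exact reweighting formula are assumptions in the style of standard contrastive-decoding methods, not the authors' exact implementation.

```python
import numpy as np

def identify_irrelevant_tokens(attn_instr_a, attn_instr_b, top_k=3):
    """Hypothetical criterion: image tokens whose attention weight changes
    least between two semantically distinct instructions are treated as
    instruction-agnostic, and hence likely irrelevant to the query."""
    delta = np.abs(attn_instr_a - attn_instr_b)
    return np.argsort(delta)[:top_k]  # indices of least-changed tokens

def contrastive_logits(logits_full, logits_irrelevant, alpha=1.0):
    """Contrastive-decoding style reweighting (assumed form): amplify the
    prediction conditioned on the full image and penalize the prediction
    conditioned on the irrelevant tokens alone."""
    return (1 + alpha) * logits_full - alpha * logits_irrelevant

# Toy example: attention over 4 image tokens under two instructions.
attn_a = np.array([0.5, 0.1, 0.2, 0.2])
attn_b = np.array([0.1, 0.1, 0.2, 0.6])
irrelevant = identify_irrelevant_tokens(attn_a, attn_b, top_k=2)

# Vocabulary logits from the full image vs. from irrelevant tokens only.
logits_full = np.array([2.0, 1.0, 0.5])
logits_irr = np.array([1.0, 1.0, 0.5])
adjusted = contrastive_logits(logits_full, logits_irr, alpha=0.5)
```

Because the adjustment happens purely at decoding time on the output logits, the underlying LVLM stays frozen, which is what makes this family of methods training-free and plug-and-play.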