SAGE: Sink-Aware Grounded Decoding for Multimodal Hallucination Mitigation

📅 2026-03-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the prevalent issue of hallucination in large vision-language models, where generated text often contradicts visual input. The authors propose a decoding-stage dynamic control method that requires neither retraining nor architectural modifications. Their approach leverages “sink tokens”—identified within self-attention mechanisms—as visual grounding anchors. By integrating self-attention maps with gradient-based attribution techniques, the method continuously evaluates the semantic consistency between generated content and the image in real time. Based on this assessment, it dynamically sharpens or smooths the attention distribution to suppress hallucinatory outputs. Evaluated on the MSCOCO and AMBER benchmarks, the technique achieves relative hallucination reductions of 10.65% and 7.19%, respectively, while preserving descriptive coverage.
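The summary's notion of "sink tokens" can be made concrete with a minimal numpy sketch. This is an illustration only: it assumes a simple criterion in which a token counts as a sink when the average attention it receives exceeds a multiple of the uniform share, which is a common heuristic for attention sinks but not necessarily the paper's exact rule.

```python
import numpy as np

def find_sink_tokens(attn, ratio=2.0):
    """Flag tokens that receive disproportionate attention.

    attn: (num_queries, num_keys) self-attention map; each row sums to 1.
    A key token is treated as a "sink" when the average attention it
    receives exceeds `ratio` times the uniform share 1/num_keys.
    (Heuristic criterion, assumed for illustration.)
    """
    received = attn.mean(axis=0)       # average attention each key receives
    threshold = ratio / attn.shape[1]  # ratio * uniform share
    return np.flatnonzero(received > threshold)

# Toy example: token 0 (e.g. a punctuation token) hoards attention.
attn = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.80, 0.10, 0.05, 0.05],
    [0.60, 0.20, 0.10, 0.10],
])
print(find_sink_tokens(attn))  # → [0]
```

In the method described above, hitting such a token during generation would trigger the real-time grounding check.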
📝 Abstract
Large vision-language models (VLMs) frequently suffer from hallucinations, generating content that is inconsistent with visual inputs. Existing methods typically address this problem through post-hoc filtering, additional training objectives, or external verification, but they do not intervene during the decoding process when hallucinations arise. In this work, we introduce SAGE, a Sink-Aware Grounded Decoding framework that mitigates hallucinations by dynamically modulating self-attention during generation. Hallucinations are strongly correlated with attention sink tokens: punctuation or function tokens that accumulate disproportionate attention despite carrying limited semantic content. SAGE leverages these tokens as anchors to monitor grounding reliability in real time. At each sink trigger, the method extracts semantic concepts from the generated sequence, estimates their visual grounding using both self-attention maps and gradient-based attribution, and measures their spatial agreement. Based on this signal, self-attention distributions are adaptively sharpened or broadened to reinforce grounded regions or suppress unreliable ones. Extensive experiments across diverse hallucination benchmarks demonstrate that SAGE consistently outperforms existing decoding strategies, achieving substantial reductions in hallucination while preserving descriptive coverage, without requiring model retraining or architectural modifications. Our method achieves an average relative improvement of 10.65% on MSCOCO and 7.19% on AMBER across diverse VLM architectures, demonstrating consistent gains in hallucination mitigation.
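The adaptive sharpening/broadening step can be sketched as softmax temperature scaling driven by a scalar grounding score. This is an assumed realization: the abstract only states that attention distributions are "adaptively sharpened or broadened", so the temperature mapping and the [0, 1] grounding score below are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def modulate_attention(logits, grounding_score, low=0.5, high=1.5):
    """Sharpen or broaden an attention distribution via temperature.

    grounding_score in [0, 1]: high agreement between attention-based
    and gradient-based localization -> temperature near `low` (< 1),
    sharpening attention onto grounded regions; low agreement ->
    temperature near `high` (> 1), broadening to avoid over-committing
    to unreliable regions. (Linear mapping assumed for illustration.)
    """
    temperature = high - (high - low) * grounding_score
    scaled = logits / temperature
    scaled -= scaled.max()               # numerical stability
    weights = np.exp(scaled)
    return weights / weights.sum()

logits = np.array([2.0, 1.0, 0.5, 0.1])
sharp = modulate_attention(logits, grounding_score=1.0)  # temperature 0.5
broad = modulate_attention(logits, grounding_score=0.0)  # temperature 1.5
# The top-scoring region gets more mass when grounding agreement is high.
```

Under this sketch, a well-grounded concept concentrates attention on its supporting image regions, while a poorly grounded one has its attention flattened, which is the suppression mechanism the abstract describes.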
Problem

Research questions and friction points this paper is trying to address.

multimodal hallucination
vision-language models
decoding process
attention sink
visual grounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

hallucination mitigation
grounded decoding
attention sink
vision-language models
self-attention modulation