🤖 AI Summary
To address pervasive visual hallucinations in large vision-language models (LVLMs), this paper proposes Mixture of Decoding (MoD), a fine-tuning-free, zero-shot adaptive decoding strategy. MoD uses attention consistency as the criterion for dynamically selecting a decoding path: it checks whether the model's attention over image tokens aligns with the underlying visual semantics, then switches between a complementary enhancement mode and a contrastive suppression mode accordingly. Its core mechanism compares the output generated from the original image tokens against the output generated from only the attended image tokens to judge attention correctness, enabling hallucination-aware strategy selection at inference time. Across multiple mainstream benchmarks, MoD significantly reduces hallucination rates while improving answer accuracy and visual factual consistency, outperforming existing decoding methods.
📝 Abstract
Large Vision-Language Models (LVLMs) have exhibited impressive capabilities across various visual tasks, yet they remain hindered by the persistent challenge of hallucinations. To address this critical issue, we propose Mixture of Decoding (MoD), a novel approach for hallucination mitigation that dynamically adapts decoding strategies by evaluating the correctness of the model's attention on image tokens. Specifically, MoD assesses this correctness by measuring the consistency between outputs generated from the original image tokens and those derived from the model's attended image tokens. If the outputs are consistent, indicating correct attention, MoD employs a complementary strategy to amplify critical information. Conversely, if the outputs are inconsistent, suggesting erroneous attention, MoD utilizes a contrastive strategy to suppress misleading information. Extensive experiments demonstrate that MoD significantly outperforms existing decoding methods across multiple mainstream benchmarks, effectively mitigating hallucinations in LVLMs. The code is available at https://github.com/xlchen0205/MoD.
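The consistency check and the two decoding branches can be sketched as a single decoding step. This is an illustrative simplification, not the paper's exact formulation: the agreement test, the linear complementary/contrastive mixing rules, and the `alpha` strength parameter are all assumptions chosen for clarity; see the linked repository for the actual method.

```python
import math

def mod_decode_step(logits_orig, logits_attn, alpha=1.0):
    """One MoD-style decoding step (illustrative sketch, not the paper's exact equations).

    logits_orig: next-token logits conditioned on the original image tokens
    logits_attn: next-token logits conditioned on only the model's attended image tokens
    alpha: hypothetical mixing-strength hyperparameter
    """
    argmax = lambda xs: max(range(len(xs)), key=xs.__getitem__)
    # Consistency check: do both views agree on the most likely next token?
    consistent = argmax(logits_orig) == argmax(logits_attn)
    if consistent:
        # Correct attention: complementary strategy amplifies shared evidence.
        mixed = [o + alpha * a for o, a in zip(logits_orig, logits_attn)]
    else:
        # Erroneous attention: contrastive strategy suppresses the misleading view.
        mixed = [(1 + alpha) * o - alpha * a for o, a in zip(logits_orig, logits_attn)]
    # Numerically stable softmax over the mixed logits.
    m = max(mixed)
    exps = [math.exp(x - m) for x in mixed]
    total = sum(exps)
    return [e / total for e in exps], consistent
```

In a real LVLM this per-step rule would typically be applied inside the generation loop (e.g. as a logits processor), with the attended-token view obtained by masking out low-attention image tokens before a second forward pass.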