Mixture of Decoding: An Attention-Inspired Adaptive Decoding Strategy to Mitigate Hallucinations in Large Vision-Language Models

📅 2025-05-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address pervasive visual hallucinations in large vision-language models (LVLMs), this paper proposes Mixture of Decoding (MoD), a fine-tuning-free, zero-shot adaptive decoding strategy. MoD uses attention consistency as the criterion for dynamic decoding-path selection: it quantifies the alignment between the model's attention distribution over image tokens and the underlying visual semantics, then switches between complementary enhancement and contrastive suppression decoding modes accordingly. Its core innovation is an image-token masking-and-reconstruction mechanism for assessing attention correctness, enabling hallucination-aware mode switching at decoding time. Evaluated across multiple mainstream benchmarks, MoD significantly reduces hallucination rates while improving answer accuracy and visual factual consistency, outperforming existing decoding methods.
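As a rough illustration of the masking-and-reconstruction check described above, the Python sketch below keeps only the most-attended image tokens, regenerates the answer, and compares it with the answer produced from the full image. All names here (a `model.generate` that accepts `image_embeds`, the `keep_ratio` knob, the top-k selection) are hypothetical placeholders, not the authors' API; the released code at the GitHub link below is authoritative.

```python
def attention_is_correct(model, tokenizer, image_embeds, attn_over_image,
                         prompt_ids, keep_ratio=0.2):
    """Hedged sketch of an attention-correctness check via token masking.

    image_embeds:    (1, num_tokens, dim) image token embeddings.
    attn_over_image: (1, num_tokens) attention mass on each image token.
    keep_ratio is an illustrative knob, not a value from the paper.
    """
    # Rank image tokens by attention mass and keep only the top fraction.
    num_tokens = image_embeds.size(1)
    k = max(1, int(keep_ratio * num_tokens))
    top_idx = attn_over_image.topk(k, dim=-1).indices.squeeze(0)
    attended_embeds = image_embeds[:, top_idx, :]

    # Decode once from the full image and once from the attended subset.
    # (A generate() that takes image_embeds directly is an assumed interface.)
    out_full = model.generate(image_embeds=image_embeds, input_ids=prompt_ids)
    out_attn = model.generate(image_embeds=attended_embeds, input_ids=prompt_ids)

    text_full = tokenizer.decode(out_full[0], skip_special_tokens=True)
    text_attn = tokenizer.decode(out_attn[0], skip_special_tokens=True)

    # Matching outputs suggest the attention landed on the right visual evidence.
    return text_full == text_attn
```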

📝 Abstract
Large Vision-Language Models (LVLMs) have exhibited impressive capabilities across various visual tasks, yet they remain hindered by the persistent challenge of hallucinations. To address this critical issue, we propose Mixture of Decoding (MoD), a novel approach for hallucination mitigation that dynamically adapts decoding strategies by evaluating the correctness of the model's attention on image tokens. Specifically, MoD measures the consistency between outputs generated from the original image tokens and those derived from the model's attended image tokens, to assess the aforementioned correctness. If the outputs are consistent, indicating correct attention, MoD employs a complementary strategy to amplify critical information. Conversely, if the outputs are inconsistent, suggesting erroneous attention, MoD utilizes a contrastive strategy to suppress misleading information. Extensive experiments demonstrate that MoD significantly outperforms existing decoding methods across multiple mainstream benchmarks, effectively mitigating hallucinations in LVLMs. The code is available at https://github.com/xlchen0205/MoD.
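The two decoding modes in the abstract can be summarized as a single logit-fusion rule keyed on the attention-correctness verdict. The sketch below is a minimal, hypothetical rendering: the complementary branch blends the two views, while the contrastive branch follows the standard contrastive-decoding form. The fusion formulas, the alpha coefficient, and the function name are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

def mod_fuse_logits(logits_full: torch.Tensor,
                    logits_attended: torch.Tensor,
                    consistent: bool,
                    alpha: float = 0.5) -> torch.Tensor:
    """Select a decoding mode from the attention-correctness verdict.

    Both fusion rules and alpha are illustrative placeholders,
    not the paper's exact coefficients.
    """
    if consistent:
        # Complementary: amplify evidence the two views agree on.
        return (1 - alpha) * logits_full + alpha * logits_attended
    # Contrastive: push the full-image prediction away from the
    # erroneously attended view, in the spirit of contrastive decoding.
    return (1 + alpha) * logits_full - alpha * logits_attended

# One greedy next-token step under either mode:
# fused = mod_fuse_logits(logits_full, logits_attended, consistent=True)
# next_id = fused.softmax(dim=-1).argmax(dim=-1)
```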
Problem

Research questions and friction points this paper is trying to address.

Mitigating hallucinations in Large Vision-Language Models
Dynamically adapting decoding strategies based on attention correctness
Assessing attention correctness via the consistency of outputs from original versus attended image tokens
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic decoding strategy adaptation
Attention correctness evaluation mechanism
Contrastive-complementary information modulation
Authors

Xinlong Chen
New Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA); School of Artificial Intelligence, University of Chinese Academy of Sciences

Yuanxing Zhang
Kuaishou Technology
Recommender System, Large Language Model, Video Understanding

Qiang Liu
New Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA); School of Artificial Intelligence, University of Chinese Academy of Sciences

Junfei Wu
Institute of Automation, Chinese Academy of Sciences
Multimodal Reasoning, Large Vision-Language Model, Fake News Detection

Fuzheng Zhang
Kuaishou Technology

Tieniu Tan
Institute of Automation, Chinese Academy of Sciences