Residual Decoding: Mitigating Hallucinations in Large Vision-Language Models via History-Aware Residual Guidance

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the susceptibility of large vision-language models (LVLMs) to language priors, which often yields hallucinated outputs decoupled from the visual input. To mitigate this, the authors propose a training-free residual decoding method that introduces, for the first time, a history-aware residual guidance mechanism. The approach leverages the model's internal inference history and the dynamic evolution of token logits to correct decoding biases without modifying the model architecture. It effectively suppresses hallucinations induced by language priors and significantly improves visual grounding and alignment while preserving general multimodal comprehension. Extensive evaluations demonstrate state-of-the-art performance across multiple LVLM benchmarks.

📝 Abstract
Large Vision-Language Models (LVLMs) reason effectively over image-text inputs and perform well across a wide range of multimodal tasks. Despite this success, they remain susceptible to language priors and often produce hallucinations: generated content that is grammatically and syntactically coherent yet has no grounding in, or direct relevance to, the actual visual input. To address this problem, we propose Residual Decoding (ResDec), a novel training-free method that uses historical information to guide decoding. ResDec relies on the internal implicit reasoning of LVLMs and the evolution of their token logits across decoding steps to correct biases. Extensive experiments demonstrate that ResDec effectively suppresses hallucinations induced by language priors, significantly improves visual grounding, and reduces object hallucinations. Beyond mitigating hallucinations, ResDec also performs strongly on comprehensive LVLM benchmarks, highlighting its broad applicability.
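
The abstract does not spell out the exact update rule, so the following is a minimal sketch of one plausible reading: treat a running average of past-step logits as an estimate of the language prior, and amplify each step's residual against that prior so tokens favored by fresh visual evidence gain probability mass. The function name, the mean-over-history choice, and the `alpha` strength are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def residual_guided_logits(current_logits: torch.Tensor,
                           history_logits: list[torch.Tensor],
                           alpha: float = 0.5) -> torch.Tensor:
    """Correct one decoding step's logits with a history-aware residual.

    Assumption: the running mean of past-step logits approximates the
    language prior; the residual (current minus prior) is re-amplified
    so the final distribution leans on what the current step adds beyond
    that prior. `alpha` is a hypothetical guidance-strength parameter.
    """
    if not history_logits:
        return current_logits  # first step: no history to correct against
    prior = torch.stack(history_logits).mean(dim=0)  # language-prior estimate
    residual = current_logits - prior                # history-aware residual
    return current_logits + alpha * residual         # bias-corrected logits


# Toy usage: random logits stand in for an LVLM forward pass.
vocab_size, history = 32000, []
for step in range(5):
    logits = torch.randn(vocab_size)  # stand-in for model(image, prefix)
    guided = residual_guided_logits(logits, history, alpha=0.5)
    next_token = int(torch.argmax(F.softmax(guided, dim=-1)))
    history.append(logits)
```

Because the correction reuses logits the model already produced, a scheme like this stays training-free and adds only one extra vector operation per step, which is consistent with the paper's claim of requiring no architectural changes or extra training.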
Problem

Research questions and friction points this paper is trying to address.

hallucinations
large vision-language models
language priors
visual grounding
object hallucinations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Residual Decoding
Hallucination Mitigation
Vision-Language Models
Language Priors
Training-Free Method