🤖 AI Summary
This work addresses the susceptibility of large vision-language models (LVLMs) to language priors, which often leads to hallucinated outputs decoupled from visual inputs. To mitigate this issue, the authors propose a training-free residual decoding method that introduces, for the first time, a history-aware residual guidance mechanism. This approach leverages the model’s internal inference history and the dynamic evolution of token logits to correct decoding biases without modifying the model architecture or requiring additional training. The method effectively suppresses hallucinations induced by language priors, significantly enhancing visual grounding and alignment while preserving general multimodal comprehension capabilities. Extensive evaluations demonstrate state-of-the-art performance across multiple LVLM benchmarks.
📝 Abstract
Large Vision-Language Models (LVLMs) reason effectively over image-text inputs and perform well on a wide range of multimodal tasks. Despite this success, they are susceptible to language priors and often produce hallucinations: generated content that is grammatically and syntactically coherent yet has no match or direct relevance to the actual visual input. To address this problem, we propose Residual Decoding (ResDec), a novel training-free method that leverages historical information to aid decoding. ResDec relies on the internal implicit reasoning mechanism and the token-logits evolution of LVLMs to correct decoding biases. Extensive experiments demonstrate that ResDec effectively suppresses hallucinations induced by language priors, significantly improves visual grounding, and reduces object hallucinations. Beyond mitigating hallucinations, ResDec also performs strongly on comprehensive LVLM benchmarks, highlighting its broad applicability.
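The abstract describes correcting decoding biases from the evolution of token logits over the generation history. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch of one plausible residual-guidance scheme: keep an exponential moving average of past logits, and boost tokens whose current logits deviate upward from that history. The function name `residual_decode_step` and the `alpha`/`beta` parameters are hypothetical, not the authors' API.

```python
import numpy as np

def residual_decode_step(logits, history, alpha=0.5, beta=0.9):
    """One hypothetical residual-guided decoding step (illustrative only).

    logits:  current-step token logits, shape (vocab_size,)
    history: running EMA of past logits, or None at the first step
    alpha:   strength of the residual correction (assumed parameter)
    beta:    EMA decay for the logit history (assumed parameter)
    Returns the adjusted logits and the updated history.
    """
    if history is None:
        # First step: nothing to correct against; seed the history.
        return logits, logits.copy()
    # Residual: how the current logits deviate from the running history.
    residual = logits - history
    # Amplify the deviation — tokens whose evidence is newly rising are
    # boosted relative to tokens favored only by a stale (prior-driven) trend.
    adjusted = logits + alpha * residual
    # Update the history with the raw (uncorrected) logits.
    new_history = beta * history + (1.0 - beta) * logits
    return adjusted, new_history
```

In a real decoding loop this would be applied before softmax/sampling at each step; the actual ResDec mechanism may combine history in a different way (e.g. across layers rather than across steps).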