🤖 AI Summary
This work addresses the susceptibility of large vision-language models (LVLMs) to hallucination during generation, a challenge exacerbated by existing methods' reliance on static, single-step states that fail to adapt to dynamic contextual shifts and mitigate information loss. To tackle this, the authors propose ACT, an inference-time, training-free intervention that adaptively fuses contextual signals to suppress hallucinations. ACT dynamically modulates attention heads during decoding to enhance spatiotemporal visual exploration and introduces a semantic query marginalization mechanism to aggregate visual evidence, effectively compensating for information loss inherent in discrete token prediction. Evaluated across multiple LVLMs, ACT significantly reduces hallucination rates while achieving state-of-the-art performance on both discriminative and generative benchmarks, all without compromising core language generation capabilities.
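The head-modulation idea above can be illustrated with a toy sketch. The paper does not specify its profiling or scaling formulas, so everything here is an assumption: `head_scores` stands in for the spatio-temporal profiling signal, and `alpha`/`top_k` are hypothetical hyperparameters. The sketch simply scales up the attention maps of the heads judged most "visual" and renormalizes.

```python
import numpy as np

def amplify_heads(attn, head_scores, top_k=4, alpha=1.5):
    """Toy sketch of amplifying 'visual exploration' attention heads.

    attn:        (num_heads, q_len, kv_len) attention weights for one layer.
    head_scores: hypothetical per-head profiling scores; the paper derives
                 these via spatio-temporal profiling, whose exact form is
                 not given in the abstract.
    """
    boosted = attn.copy()
    top = np.argsort(head_scores)[-top_k:]           # heads judged most visual
    boosted[top] *= alpha                            # amplify their attention
    boosted /= boosted.sum(axis=-1, keepdims=True)   # renormalize each row
    return boosted
```

In an actual LVLM this scaling would be applied inside the decoder's attention layers at inference time, consistent with ACT being training-free.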
📝 Abstract
Large Vision-Language Models (LVLMs) frequently suffer from severe hallucination issues. Existing mitigation strategies predominantly rely on isolated, single-step states to enhance visual focus or suppress strong linguistic priors. However, these static approaches neglect dynamic context changes across the generation process and struggle to correct inherited information loss. To address this limitation, we propose Adaptive Context inTegration (ACT), a training-free inference intervention method that mitigates hallucination through the adaptive integration of contextual information. Specifically, we first propose visual context exploration, which leverages spatio-temporal profiling to adaptively amplify attention heads responsible for visual exploration. To further facilitate vision-language alignment, we propose semantic context aggregation, which marginalizes potential semantic queries to effectively aggregate visual evidence, thereby resolving the information loss caused by the discrete nature of token prediction. Extensive experiments across diverse LVLMs demonstrate that ACT significantly reduces hallucinations and achieves competitive results on both discriminative and generative benchmarks, acting as a robust and highly adaptable solution without compromising fundamental generation capabilities.
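The semantic context aggregation step can be pictured as an expectation over candidate queries. This is a minimal sketch under assumptions the abstract does not confirm: the candidate set, its prior `weights`, and the dot-product read-out are all hypothetical stand-ins for the paper's mechanism. Instead of reading visual features with a single committed query (one discrete token), the sketch averages the visual evidence retrieved by several plausible queries.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def marginalize_queries(query_candidates, visual_keys, visual_values, weights=None):
    """Toy sketch: aggregate visual evidence over candidate semantic queries.

    query_candidates: (n_queries, d) plausible semantic queries -- a
                      hypothetical stand-in for the paper's candidate set.
    weights:          prior over candidates; uniform if None.
    """
    n = len(query_candidates)
    if weights is None:
        weights = np.full(n, 1.0 / n)
    attn = softmax(query_candidates @ visual_keys.T)   # (n_queries, n_visual)
    evidence = attn @ visual_values                    # per-query visual readout
    return weights @ evidence                          # expectation over queries
```

The point of marginalizing rather than committing to one query is that a single discrete token prediction discards the probability mass of its alternatives; averaging retains visual evidence associated with those alternatives.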