🤖 AI Summary
This work addresses the limitations of existing visual attention enhancement methods, which rely on fixed scaling factors and often fail to suppress—and may even exacerbate—hallucinations during generation. To overcome this, the authors propose AdaVBoost, the first framework to perform token-level adaptive visual attention enhancement. AdaVBoost dynamically assesses the hallucination risk of each token at every generation step using Visual Grounding Entropy (VGE) and modulates attention strength accordingly in real time. By moving beyond static strategies, this approach significantly reduces hallucination rates across multiple large vision-language models and state-of-the-art hallucination evaluation benchmarks, consistently outperforming current methods.
📝 Abstract
Visual attention boosting has emerged as a promising direction for mitigating hallucinations in Large Vision-Language Models (LVLMs), where existing methods primarily focus on where to boost by applying a predefined scaling to the attention of method-specific visual tokens during autoregressive generation. In this paper, we identify a fundamental trade-off in these methods: a predefined scaling factor can be too weak at some generation steps, leaving hallucinations unresolved, yet too strong at others, introducing new hallucinations. Motivated by this finding, we propose AdaVBoost, a token-level visual attention boosting framework that adaptively determines how much attention to boost at each generation step. Specifically, we introduce Visual Grounding Entropy (VGE) to estimate hallucination risk, leveraging visual grounding as a complementary signal that captures evidence mismatches entropy alone misses. Guided by VGE, AdaVBoost applies stronger visual attention boosting to high-risk tokens and weaker boosting to low-risk tokens, enabling token-level adaptive intervention at each generation step. Extensive experiments show that AdaVBoost significantly outperforms baseline methods across multiple LVLMs and hallucination benchmarks.
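The core mechanism described above — scoring each token's hallucination risk and mapping that score to a boosting strength — can be sketched as follows. Note this is an illustrative reconstruction, not the paper's actual implementation: the exact VGE formula, the grounding signal, and the risk-to-strength mapping (`adaptive_boost` with hypothetical bounds `alpha_min`/`alpha_max`) are assumptions for the sketch; the paper defines VGE precisely.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def visual_grounding_entropy(token_probs, grounding_scores):
    """Hypothetical VGE-style risk score: next-token entropy modulated by a
    visual grounding mismatch signal (1 = well grounded, 0 = ungrounded).
    The actual paper formula may differ."""
    entropy = -np.sum(token_probs * np.log(token_probs + 1e-12))
    mismatch = 1.0 - grounding_scores.mean()  # higher when evidence is weak
    return entropy * (1.0 + mismatch)

def adaptive_boost(attn, visual_mask, vge,
                   alpha_min=1.0, alpha_max=2.0, vge_max=5.0):
    """Scale attention on visual tokens by a factor that grows with VGE,
    then renormalize. alpha_min/alpha_max/vge_max are illustrative bounds."""
    risk = np.clip(vge / vge_max, 0.0, 1.0)
    alpha = alpha_min + risk * (alpha_max - alpha_min)
    boosted = attn.copy()
    boosted[visual_mask] *= alpha          # stronger boost for riskier tokens
    return boosted / boosted.sum(), alpha  # renormalize to a distribution
```

Because the scaling factor is recomputed per generation step from the current token's risk, high-risk steps receive a strong boost while confident, well-grounded steps are left nearly untouched — the adaptivity that a fixed factor lacks.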