Mitigating Object Hallucinations in LVLMs via Attention Imbalance Rectification

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the reliability limitations of large vision-language models (LVLMs) in high-stakes scenarios caused by object hallucination. It introduces, for the first time, the concept of "attention imbalance" and establishes its causal relationship with hallucinatory behavior. To mitigate this issue, the authors propose AIR, a training-free, decoding-time intervention that rebalances both cross-modal and token-level attention weights during inference with minimal overhead. Extensive experiments across four prominent LVLMs and three benchmark datasets show that AIR reduces object hallucination rates by up to 35.1% while simultaneously improving general performance on diverse vision-language tasks by as much as 15.9%.

📝 Abstract
Object hallucination in Large Vision-Language Models (LVLMs) severely compromises their reliability in real-world applications, posing a critical barrier to their deployment in high-stakes scenarios such as autonomous driving and medical image analysis. Through systematic empirical investigation, we identify that imbalanced attention allocation, both across modalities (i.e., vision and language) and within modalities (among individual tokens), exhibits a strong causal correlation with the occurrence of object hallucination. Leveraging this insight, we introduce a novel concept termed attention imbalance, which not only quantifies the degree of attention disparity but also visually delineates the underlying patterns (e.g., over-attentiveness to irrelevant language tokens or under-attentiveness to discriminative visual features) that drive object hallucination. To mitigate object hallucination, we further propose Attention Imbalance Rectification (AIR), a lightweight decoding-time intervention method that reallocates attention weights and adjusts attention distributions to rectify modality-wise and token-wise imbalances. Extensive evaluations on four mainstream LVLMs and three benchmarks (CHAIR, POPE, and MM-Vet) against seven baselines demonstrate that AIR consistently reduces object hallucination rates, achieving up to a 35.1% reduction over the baselines, while improving LVLMs' general capability by up to 15.9% across diverse vision-language tasks.
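The abstract describes AIR only at a high level, so the sketch below is an illustrative reconstruction, not the paper's actual algorithm. It shows one plausible form of a two-stage decoding-time rectification: a modality-wise step that rescales the total attention mass assigned to visual versus language tokens toward a target share, and a token-wise step that temperature-smooths each modality's distribution to curb over-attention to a few tokens. The function name, the `target_vision_mass` and `temperature` parameters, and the choice of per-modality temperature smoothing are all assumptions for illustration.

```python
import numpy as np

def rectify_attention(attn, vision_idx, text_idx,
                      target_vision_mass=0.5, temperature=1.5):
    """Hypothetical sketch of decoding-time attention rectification.

    attn: 1-D attention weights over context tokens at the current
    decoding step (non-negative, sums to 1).
    vision_idx / text_idx: index arrays partitioning the context into
    visual and language tokens.
    """
    attn = np.asarray(attn, dtype=float)
    out = np.zeros_like(attn)

    # Modality-wise rectification: force the visual tokens to receive
    # a target share of the total attention mass (assumed fixed here;
    # a real method would likely set it adaptively).
    v_mass = attn[vision_idx].sum()
    t_mass = attn[text_idx].sum()
    out[vision_idx] = attn[vision_idx] / v_mass * target_vision_mass
    out[text_idx] = attn[text_idx] / t_mass * (1.0 - target_vision_mass)

    # Token-wise rectification: temperature > 1 flattens each
    # modality's distribution, discouraging over-attention to a few
    # (possibly irrelevant) tokens; mass per modality is preserved.
    for idx in (vision_idx, text_idx):
        p = out[idx] ** (1.0 / temperature)
        out[idx] = p / p.sum() * out[idx].sum()

    return out
```

A caller would apply this per head (or to an averaged attention map) before the weighted sum over value vectors; since the intervention is training-free, it slots into existing decoding loops without touching model weights.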

Problem

Research questions and friction points this paper is trying to address.

object hallucination
Large Vision-Language Models
attention imbalance
reliability
real-world applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

attention imbalance
object hallucination
vision-language models
decoding-time intervention
attention rectification