Countering the Over-Reliance Trap: Mitigating Object Hallucination for LVLMs via a Self-Validation Framework

📅 2026-01-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses object hallucination in large vision-language models (LVLMs), where generated image captions include objects absent from the input image due to over-reliance on language priors. To mitigate this, the authors propose a training-free self-validation framework that assesses the confidence of object presence in candidate captions through a verification mechanism independent of language priors, enabling effective caption selection or aggregation. The study presents the first systematic analysis of how reliance on language priors and generation length influence hallucination, unlocking the intrinsic capabilities of LVLMs without additional training. Evaluated with LLaVA-v1.5-7B, the proposed method achieves a 65.6% improvement on the CHAIRi metric, substantially outperforming prior state-of-the-art approaches.

📝 Abstract
Despite progress in Large Vision Language Models (LVLMs), object hallucination remains a critical issue in the image captioning task, where models generate descriptions of non-existent objects, compromising their reliability. Previous work attributes this to LVLMs' over-reliance on language priors and attempts to mitigate it through logits calibration. However, it still lacks a thorough analysis of this over-reliance. To gain a deeper understanding, we conduct a series of preliminary experiments, indicating that as the generation length increases, LVLMs' over-reliance on language priors inflates the probability of hallucinated object tokens, consequently exacerbating object hallucination. To circumvent this issue, we propose Language-Prior-Free Verification to enable LVLMs to faithfully verify the confidence of object existence. Based on this, we propose a novel training-free Self-Validation Framework to counter the over-reliance trap. It first validates objects' existence in sampled candidate captions and further mitigates object hallucination via caption selection or aggregation. Experimental results demonstrate that our framework mitigates object hallucination significantly in the image captioning task (e.g., a 65.6% improvement on the CHAIRi metric with LLaVA-v1.5-7B), surpassing previous SOTA methods. This result highlights a novel path towards mitigating hallucination by unlocking the inherent potential within LVLMs themselves.
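The abstract describes a pipeline in which candidate captions are sampled, the existence confidence of each mentioned object is verified, and a caption is then selected or aggregated. A minimal, model-free sketch of the caption-selection step is shown below; all function names, the object extractor, and the stub confidence table are hypothetical illustrations, since a real implementation would query the LVLM's language-prior-free verifier for each object:

```python
from typing import Callable, Dict, List

def select_caption(
    candidates: List[str],
    objects_in: Callable[[str], List[str]],
    verify: Callable[[str], float],
) -> str:
    """Pick the candidate caption whose least-confident object is most
    strongly verified (a max-min selection rule over object confidences)."""
    def score(caption: str) -> float:
        objs = objects_in(caption)
        if not objs:
            return 1.0  # no objects claimed, so nothing can be hallucinated
        return min(verify(obj) for obj in objs)
    return max(candidates, key=score)

# Stub components standing in for the LVLM-based verifier (illustrative only).
def toy_objects(caption: str) -> List[str]:
    vocab = {"dog", "cat", "frisbee", "car"}
    return [w.strip(".,") for w in caption.lower().split() if w.strip(".,") in vocab]

toy_conf: Dict[str, float] = {"dog": 0.95, "frisbee": 0.90, "cat": 0.20, "car": 0.10}

captions = [
    "A dog catches a frisbee.",
    "A dog and a cat near a car.",
]
best = select_caption(captions, toy_objects, lambda o: toy_conf.get(o, 0.0))
print(best)  # → A dog catches a frisbee.
```

With the stub confidences, the second caption is penalized because its weakest object ("car", confidence 0.10) drags down its max-min score; the paper's aggregation variant would instead combine high-confidence fragments across candidates.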
Problem

Research questions and friction points this paper is trying to address.

object hallucination
Large Vision Language Models
language priors
image captioning
over-reliance
Innovation

Methods, ideas, or system contributions that make the work stand out.

object hallucination
large vision language models
self-validation framework
language priors
training-free mitigation