🤖 AI Summary
To address pervasive object hallucination (the erroneous generation of nonexistent objects or attributes) in large vision-language models (LVLMs) during inference, this paper proposes a training-free, model-agnostic decoding calibration method that requires no external knowledge, additional data, or extra training cost. The core idea is a fact-consistency contrastive mechanism grounded in perturbing the model's internal representations: because such perturbations amplify language-bias hallucinations, contrasting the perturbed output distribution against the original one suppresses hallucination-correlated logits and calibrates predictions directly in logit space. On the object-hallucination subsets of the POPE and MME benchmarks, the method improves accuracy by an average of 9% and 8%, respectively, mitigating both object-level and attribute-level hallucinations. The result is an efficient, general-purpose post-hoc framework for improving LVLM robustness in visual reasoning without architectural modification.
📝 Abstract
Large Visual Language Models (LVLMs) integrate visual and linguistic modalities, exhibiting exceptional performance across various multimodal tasks. Nevertheless, LVLMs remain vulnerable to object hallucinations. Previous efforts to mitigate this issue focus on supervised fine-tuning (SFT) or incorporating external knowledge, both of which entail significant costs for training and the acquisition of external data. To address these challenges, we propose a novel model-agnostic approach termed Internal Fact-based Contrastive Decoding (IFCD), designed to mitigate and suppress hallucinations during the inference process of LVLMs by exploiting the LVLMs' own hallucinations. IFCD is grounded in the experimental observation that alterations to the LVLMs' internal representations tend to amplify hallucinations caused by language bias. By contrasting the disturbed distribution against the original one, IFCD calibrates the LVLMs' output and effectively removes hallucinatory logits from the final predictions. Experimental results validate that IFCD significantly alleviates both object-level and attribute-level hallucinations while achieving an average accuracy improvement of 9% on POPE and 8% on the MME object hallucination subset, compared with direct decoding.
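To make the contrastive calibration step concrete, the sketch below shows the general shape of contrastive decoding in logit space: the model's original logits are contrasted against logits from a pass whose internal representations were perturbed to amplify language-bias hallucinations, and implausible tokens are masked out. This is a minimal illustration of the generic technique, not IFCD itself; the exact contrast formula, the perturbation procedure, and the parameters `alpha` and `beta` here are assumptions for illustration.

```python
import numpy as np

def contrastive_calibrate(logits_orig, logits_perturbed, alpha=1.0, beta=0.1):
    """Sketch of contrastive logit calibration (hypothetical parameters).

    logits_orig: next-token logits from the unmodified forward pass
    logits_perturbed: logits from a pass with perturbed internal
        representations, which amplifies hallucination-prone tokens
    alpha: contrast strength (assumed value, not from the paper)
    beta: adaptive plausibility cutoff relative to the top token
    """
    # Contrast: reward tokens the original pass favors over the
    # hallucination-amplified pass, penalizing hallucinatory logits.
    calibrated = (1 + alpha) * logits_orig - alpha * logits_perturbed

    # Adaptive plausibility constraint: only keep tokens whose original
    # probability is within a beta-fraction of the most likely token,
    # so the contrast cannot promote tokens the model finds implausible.
    probs = np.exp(logits_orig - logits_orig.max())
    probs /= probs.sum()
    mask = probs >= beta * probs.max()
    return np.where(mask, calibrated, -np.inf)

# Toy example: the perturbed pass inflates token 1, so the contrast
# suppresses it relative to the original ranking.
out = contrastive_calibrate(np.array([2.0, 1.0, 0.5]),
                            np.array([0.0, 1.5, 0.5]))
```

In a decoding loop this calibration would run once per generation step, with sampling or greedy selection applied to the calibrated logits instead of the originals.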