Mitigating Hallucinations in Large Vision-Language Models with Internal Fact-based Contrastive Decoding

📅 2025-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address pervasive object hallucination, i.e., the erroneous generation of nonexistent objects or attributes, in large vision-language models (LVLMs) during inference, this paper proposes a training-free, model-agnostic decoding calibration method that requires no external knowledge. The core idea is a fact-consistency contrastive mechanism built on perturbing the model's internal representations: because such perturbations amplify hallucinations driven by language bias, contrasting the perturbed distribution against the original suppresses hallucination-correlated logits and calibrates the output directly in logit space. On POPE and the object-hallucination subset of MME, the method improves accuracy by an average of 9% and 8%, respectively, mitigating both object-level and attribute-level hallucinations. It thus offers an efficient, general-purpose post-hoc framework for improving LVLM robustness in visual reasoning without architectural modification, additional training, or extra data.

📝 Abstract
Large Visual Language Models (LVLMs) integrate visual and linguistic modalities, exhibiting exceptional performance across various multimodal tasks. Nevertheless, LVLMs remain vulnerable to the issue of object hallucinations. Previous efforts to mitigate this issue focus on supervised fine-tuning (SFT) or incorporating external knowledge, both of which entail significant costs related to training and the acquisition of external data. To address these challenges, we propose a novel model-agnostic approach termed Internal Fact-based Contrastive Decoding (IFCD), designed to mitigate and suppress hallucinations during the inference process of LVLMs by exploiting the LVLMs' own hallucinations. IFCD is grounded in the experimental observation that alterations to the LVLMs' internal representations tend to amplify hallucinations caused by language bias. By contrasting the disturbed distributions, IFCD calibrates the LVLMs' output and effectively removes the hallucinatory logits from the final predictions. Experimental results validate that IFCD significantly alleviates both object-level and attribute-level hallucinations while achieving an average 9% accuracy improvement on POPE and an 8% accuracy improvement on the MME object hallucination subset, respectively, compared with direct decoding.
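The calibration step the abstract describes, contrasting the original next-token distribution with one produced from perturbed internal representations, resembles the standard contrastive-decoding recipe. A minimal sketch follows; the paper's exact perturbation, weighting, and constraint are not given here, so `alpha`, `beta`, and the plausibility cutoff are assumed, generic choices rather than IFCD's actual formulation:

```python
import math

def contrastive_decode(logits_orig, logits_perturbed, alpha=1.0, beta=0.1):
    """Pick the next token by contrasting two logit vectors (generic sketch).

    logits_orig:      next-token logits from the unmodified model
    logits_perturbed: logits after perturbing internal representations,
                      which (per the abstract) amplifies language-bias
                      hallucinations
    alpha:            contrast strength (hypothetical default)
    beta:             plausibility cutoff (hypothetical default)
    """
    # Softmax over the original logits, used only for the plausibility mask.
    m = max(logits_orig)
    probs = [math.exp(x - m) for x in logits_orig]
    z = sum(probs)
    probs = [p / z for p in probs]
    pmax = max(probs)

    best, best_score = None, -math.inf
    for i, (lo, lp) in enumerate(zip(logits_orig, logits_perturbed)):
        # Adaptive plausibility: ignore tokens the original model itself
        # considers very unlikely, so the contrast cannot promote them.
        if probs[i] < beta * pmax:
            continue
        # Reward tokens the original model prefers relative to the
        # hallucination-amplified distribution.
        score = (1 + alpha) * lo - alpha * lp
        if score > best_score:
            best, best_score = i, score
    return best
```

For example, if the perturbed pass boosts a language-bias token that the original model only marginally prefers, the contrast can flip the greedy choice away from it, which is the intended suppression effect.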
Problem

Research questions and friction points this paper is trying to address.

Visual Language Models
Object Hallucination
Image-text Processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

IFCD
Visual Language Models
Object Hallucination Reduction
Chao Wang
School of Future Technology, Shanghai University, Shanghai, 200444, China; Institute of Artificial Intelligence, Shanghai University, Shanghai, 200444, China
Xuancheng Zhou
School of Future Technology, Shanghai University, Shanghai, 200444, China; Institute of Artificial Intelligence, Shanghai University, Shanghai, 200444, China
Weiwei Fu
Fudan University
data assimilation · inverse model · biogeochemical cycles
Yang Zhou
Institute of Artificial Intelligence, Shanghai University, Shanghai, 200444, China; School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200444, China