🤖 AI Summary
Large vision-language models (LVLMs) often hallucinate semantically plausible yet image-irrelevant content due to overreliance on linguistic priors, undermining visual faithfulness. To address this, we propose Conditional Pointwise Mutual Information (C-PMI)–guided decoding: a novel framework that jointly models visual and textual tokens under a C-PMI objective, formulates hallucination mitigation as a bilevel optimization problem, and introduces a dynamic token purification mechanism that collaboratively refines multimodal representations to strengthen cross-modal alignment. Crucially, our method requires no model fine-tuning or additional parameters, preserving the original decoding efficiency. Extensive experiments across multiple benchmarks demonstrate significant reductions in hallucination rates, validating the effectiveness and generalizability of mutual-information–driven adaptive decoding for enhancing visual fidelity.
📝 Abstract
Large Vision-Language Models (LVLMs) are susceptible to hallucinations, where generated responses seem semantically plausible yet exhibit little or no relevance to the input image. Previous studies reveal that this issue primarily stems from LVLMs' over-reliance on language priors while disregarding the visual information during decoding. To alleviate this issue, we introduce a novel Conditional Pointwise Mutual Information (C-PMI) calibrated decoding strategy, which adaptively strengthens the mutual dependency between generated texts and input images to mitigate hallucinations. Unlike existing methods that focus solely on text token sampling, we propose to jointly model the contributions of visual and textual tokens to C-PMI, formulating hallucination mitigation as a bi-level optimization problem aimed at maximizing mutual information. To solve it, we design a token purification mechanism that dynamically regulates the decoding process by sampling text tokens that remain maximally relevant to the given image, while simultaneously refining the image tokens most pertinent to the generated response. Extensive experiments across various benchmarks reveal that the proposed method significantly reduces hallucinations in LVLMs while preserving decoding efficiency.
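To make the core idea concrete, here is a minimal NumPy sketch of C-PMI-calibrated token selection. The per-token conditional PMI, log p(y | v, x) − log p(y | x), measures how much a candidate token depends on the image v rather than on the language prior alone; adding it to the image-conditioned logits down-weights prior-driven (hallucination-prone) tokens. The function names, the additive calibration with weight `alpha`, and the toy logits are illustrative assumptions — this is not the paper's bilevel formulation or its image-token purification step.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def c_pmi_scores(logits_with_image, logits_text_only):
    """Per-token conditional PMI: log p(y | v, x) - log p(y | x).

    High scores mark tokens that genuinely depend on the image;
    low (negative) scores mark tokens driven by the language prior.
    """
    p_cond = softmax(logits_with_image)   # p(y | image, text context)
    p_prior = softmax(logits_text_only)   # p(y | text context only)
    return np.log(p_cond) - np.log(p_prior)

def pmi_calibrated_select(logits_with_image, logits_text_only, alpha=1.0):
    """Greedy selection after shifting logits by alpha * C-PMI."""
    calibrated = logits_with_image + alpha * c_pmi_scores(
        logits_with_image, logits_text_only
    )
    return int(np.argmax(calibrated))

# Toy example: token 0 is favored by the language prior, token 1 by the image.
lw = np.array([2.6, 2.5, 0.0])   # logits conditioned on the image
lt = np.array([3.0, 0.5, 0.0])   # logits from the text-only prior
print(np.argmax(lw), pmi_calibrated_select(lw, lt))  # plain vs calibrated pick
```

In the toy example, plain greedy decoding on the image-conditioned logits picks token 0, but the C-PMI shift flips the choice to token 1, whose probability rises sharply once the image is provided.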