INTER: Mitigating Hallucination in Large Vision-Language Models by Interaction Guidance Sampling

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large Vision-Language Models (LVLMs) suffer from pervasive vision-language misalignment hallucinations. Inspired by human multimodal interactive cognition, this work first identifies and leverages an intrinsic cross-modal alignment mechanism within LVLMs, proposing a training-free, data-free, decoding-time interactive guidance sampling method. The approach dynamically models fine-grained interactions between visual and linguistic representations, enabling real-time calibration of generated text to align with image content during autoregressive generation. Evaluated on six mainstream VQA and image captioning benchmarks, it achieves an average improvement of up to 3.4% across five LVLMs, outperforming existing decoding strategies. The core contributions are: (1) uncovering LVLMs’ implicit multimodal interaction capability; and (2) establishing the first inference-only, zero-training-cost hallucination mitigation framework—demonstrating that effective alignment can be achieved purely through architectural introspection and decoding-time intervention.
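The decoding-time intervention described above can be pictured, in spirit, as guidance over next-token logits during autoregressive generation. The sketch below is a hypothetical illustration only — the paper's actual INTER algorithm is not specified here. It assumes a generic contrastive-style scheme: logits from an image-conditioned pass are contrasted against logits from a text-only pass, with an assumed `alpha` parameter controlling guidance strength.

```python
import numpy as np

def guided_logits(logits_with_image, logits_text_only, alpha=1.0):
    """Amplify visual evidence by contrasting the image-conditioned
    next-token logits against text-only logits.

    This is a generic decoding-time guidance sketch (not INTER itself);
    alpha is a hypothetical guidance-strength hyperparameter.
    """
    return (1 + alpha) * logits_with_image - alpha * logits_text_only

def sample_next_token(logits, temperature=1.0, rng=None):
    """Softmax over the guided logits, then sample one token id."""
    rng = rng or np.random.default_rng()
    z = logits / temperature
    z = z - z.max()                 # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(p), p=p))
```

At each generation step, the guided logits replace the model's raw image-conditioned logits before sampling; tokens favored only by the language prior (and not by the image) are down-weighted, which is the general mechanism by which such decoding strategies reduce hallucination.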

📝 Abstract
Hallucinations in large vision-language models (LVLMs) pose significant challenges for real-world applications, as LVLMs may generate responses that appear plausible yet remain inconsistent with the associated visual content. This issue rarely occurs in human cognition. We argue that this discrepancy arises from humans' ability to effectively leverage multimodal interaction information in data samples. Specifically, humans typically first gather multimodal information, analyze the interactions across modalities for understanding, and then express their understanding through language. Motivated by this observation, we conduct extensive experiments on popular LVLMs and obtain insights that surprisingly reveal human-like, though less pronounced, cognitive behavior of LVLMs on multimodal samples. Building on these findings, we further propose INTER: Interaction Guidance Sampling, a novel training-free algorithm that mitigates hallucinations without requiring additional data. Specifically, INTER explicitly guides LVLMs to effectively reapply their understanding of multimodal interaction information when generating responses, thereby reducing potential hallucinations. On six benchmarks including VQA and image captioning tasks, INTER achieves an average improvement of up to 3.4% on five LVLMs compared to the state-of-the-art decoding strategy. The code will be released when the paper is accepted.
Problem

Research questions and friction points this paper is trying to address.

Mitigating hallucinations in vision-language models
Improving multimodal interaction understanding in LVLMs
Enhancing response consistency with visual content
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interaction Guidance Sampling reduces LVLM hallucinations
Training-free algorithm leverages multimodal interaction information
Improves performance on VQA and image captioning tasks
Authors
Xin Dong
University of Chinese Academy of Sciences
Shichao Dong
Nanyang Technological University
Jin Wang
The University of Hong Kong
Jing Huang
University of Chinese Academy of Sciences
Li Zhou
Taobao & Tmall Group of Alibaba
Zenghui Sun
Taobao & Tmall Group of Alibaba
Lihua Jing
University of Chinese Academy of Sciences
Jingsong Lan
Taobao & Tmall Group of Alibaba
Xiaoyong Zhu
Jiangsu University
Bo Zheng
Taobao & Tmall Group of Alibaba