🤖 AI Summary
This work addresses object hallucination in multimodal large language models—where models generate descriptions of objects absent from the input image—often caused by insufficient visual-linguistic alignment. The authors propose a prompt-agnostic, model-agnostic, plug-and-play method that leverages object-centric attention mechanisms within a self-supervised Vision Transformer to construct an auxiliary view. This auxiliary view identifies and masks the most salient yet unsupported visual evidence, thereby strengthening the contrastive signal in Visual Contrastive Decoding (VCD). Requiring only a single, cacheable forward pass, the approach consistently and significantly improves performance across two mainstream object hallucination benchmarks on two distinct multimodal large language models.
📝 Abstract
We study object hallucination in Multimodal Large Language Models (MLLMs) and improve visual contrastive decoding (VCD) by constructing an object-aligned auxiliary view. Leveraging the object-centric attention of self-supervised Vision Transformers, we remove the most salient visual evidence to construct an auxiliary view that disrupts unsupported tokens and yields a stronger contrastive signal. Our method is prompt-agnostic and model-agnostic, and plugs seamlessly into the existing VCD pipeline with little computational overhead, i.e., a single cacheable forward pass. Empirically, our method demonstrates consistent gains on two popular object hallucination benchmarks across two MLLMs.
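The two ingredients described above — masking the most attention-salient patches to build an auxiliary view, and contrasting the resulting decoding distributions — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the top-k masking rule, the `alpha` value, and both function names are assumptions; in the actual method the saliency scores would come from a self-supervised ViT (e.g., DINO-style attention) and the contrast would be applied inside the MLLM's decoding loop.

```python
import numpy as np

def mask_salient_patches(attn_scores, patch_feats, top_k):
    """Zero out the top-k most salient patches (hypothetical stand-in for
    the paper's object-centric masking; attn_scores would be [CLS]-to-patch
    attention from a self-supervised ViT)."""
    masked = patch_feats.copy()
    top = np.argsort(attn_scores)[-top_k:]  # indices of the k most salient patches
    masked[top] = 0.0
    return masked

def contrastive_logits(logits_orig, logits_aux, alpha=1.0):
    """VCD-style contrast: boost tokens supported by the original view
    relative to the corrupted auxiliary view."""
    return (1.0 + alpha) * logits_orig - alpha * logits_aux
```

The intuition: a token whose evidence was destroyed by masking keeps a high logit only under the original view, so the subtraction amplifies it; a hallucinated token scores similarly under both views and gains nothing, which is why a masking scheme that targets the *salient* evidence strengthens the contrast signal.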