🤖 AI Summary
This work addresses the hallucination problem in multimodal large language models (MLLMs), which often arises from insufficient or erroneous reliance on visual information during generation. Existing approaches typically require costly training supervision or introduce inference latency. To overcome these limitations, the authors propose Adaptive Visual Reinforcement (AIR), a training-free framework that strengthens visual grounding in two stages: it first compresses the visual tokens into a compact set of prototypes via clustering, then uses optimal transport to measure the alignment between language hidden states and image patches and dynamically selects the most relevant visual cues. The selected cues are injected into the feed-forward layers. AIR achieves, for the first time, training-free adaptive selection of visual tokens, substantially reducing hallucination rates across multiple mainstream MLLMs while preserving general vision-language capabilities, demonstrating both its effectiveness and its broad applicability.
📝 Abstract
Multimodal large language models (MLLMs) have achieved remarkable progress in vision-language reasoning, yet they remain vulnerable to hallucination, where generated content deviates from visual evidence. Existing mitigation strategies either require costly supervision during training or introduce additional latency at inference time. Recent vision enhancement methods attempt to address this issue by reinforcing visual tokens during decoding, but they typically inject all tokens indiscriminately, which causes interference from background regions and distracts the model from critical cues. To overcome this challenge, we propose Adaptive Visual Reinforcement (AIR), a training-free framework for MLLMs. AIR consists of two components. Prototype-based token reduction condenses the large pool of visual tokens into a compact subset to suppress redundancy. Optimal-transport (OT) guided patch reinforcement quantifies the alignment between hidden states and patch embeddings to selectively integrate the most consistent patches into the feed-forward layers. As a result, AIR enhances the model's reliance on salient visual information and effectively mitigates hallucination. Extensive experiments across representative MLLMs demonstrate that AIR substantially reduces hallucination while preserving general capabilities, establishing it as an effective solution for building reliable MLLMs.
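As a rough illustration of the two components, the sketch below uses plain k-means for the prototype-reduction step and Sinkhorn iterations for the OT-alignment step. Both algorithmic choices, and every function name, are illustrative assumptions for this sketch, not the paper's exact formulation:

```python
import numpy as np

def kmeans_prototypes(tokens, k, iters=10, seed=0):
    """Condense visual tokens into k prototypes via plain k-means
    (an assumed stand-in for prototype-based token reduction)."""
    rng = np.random.default_rng(seed)
    protos = tokens[rng.choice(len(tokens), k, replace=False)]
    for _ in range(iters):
        # assign each token to its nearest prototype, then recenter
        d = ((tokens[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            members = tokens[assign == j]
            if len(members):
                protos[j] = members.mean(0)
    return protos

def sinkhorn_plan(cost, reg=0.1, iters=100):
    """Entropy-regularized optimal transport between uniform
    marginals (Sinkhorn iterations); returns the transport plan."""
    n, m = cost.shape
    K = np.exp(-cost / reg)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def select_patches(hidden, patches, top_k):
    """Score each patch by the total transport mass it receives from
    the hidden states, then keep the top_k best-aligned patches."""
    h = hidden / np.linalg.norm(hidden, axis=1, keepdims=True)
    p = patches / np.linalg.norm(patches, axis=1, keepdims=True)
    cost = 1.0 - h @ p.T        # cosine-distance transport cost
    plan = sinkhorn_plan(cost)
    scores = plan.sum(0)        # mass flowing into each patch
    return np.argsort(scores)[::-1][:top_k]

# Toy demo: 64 visual tokens -> 8 prototypes; keep 3 aligned patches.
rng = np.random.default_rng(1)
tokens = rng.normal(size=(64, 16))
protos = kmeans_prototypes(tokens, k=8)
hidden = rng.normal(size=(5, 16))   # stand-in language hidden states
chosen = select_patches(hidden, protos, top_k=3)
print(protos.shape, chosen.shape)   # (8, 16) (3,)
```

In this reading, the selected prototypes would then be added back into the feed-forward computation; how that injection is weighted is exactly the part the abstract leaves to the paper itself.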