🤖 AI Summary
This work addresses the tendency of multimodal large language models to deviate from visual evidence during long-form text generation due to overreliance on textual priors, which often leads to reasoning distortions and hallucinations. To mitigate this issue, the authors propose the Visual Re-Examination (VRE) framework, which activates the model's intrinsic late-stage visual verification capability through an information gain-driven self-reflection mechanism, without requiring additional visual inputs. VRE leverages the model's own reflective trajectories to enable iterative self-evolution, integrating an attention-based self-supervised mechanism with a teacher-free training strategy to turn latent visual competence into actionable signals. Experiments demonstrate that VRE significantly improves reasoning accuracy and perceptual reliability across multiple multimodal benchmarks, and in particular suppresses hallucinations in long-chain reasoning scenarios.
📝 Abstract
Multimodal Large Language Models (MLLMs) achieve strong multimodal reasoning performance, yet we identify a recurring failure mode in long-form generation: as outputs grow longer, models progressively drift away from image evidence and fall back on textual priors, resulting in ungrounded reasoning and hallucinations. Interestingly, attention analysis reveals that MLLMs possess a latent capability for late-stage visual verification that is present but not consistently activated. Motivated by this observation, we propose Visual Re-Examination (VRE), a self-evolving training framework that enables MLLMs to autonomously perform visual introspection during reasoning without additional visual inputs. Rather than distilling visual capabilities from a stronger teacher, VRE promotes iterative self-improvement by using the model itself to generate reflection traces, making visual information actionable through information gain. Extensive experiments across diverse multimodal benchmarks demonstrate that VRE consistently improves reasoning accuracy and perceptual reliability while substantially reducing hallucinations, especially in long-chain settings. Code is available at https://github.com/Xiaobu-USTC/VRE.