🤖 AI Summary
Multimodal large language models (MLLMs) suffer from severe hallucination in open-domain visual question answering, generating answers inconsistent with the input visual-semantic content. To address this, we propose the first closed-loop framework that intrinsically suppresses hallucination during training via a “dual-perception–reverse-reasoning” mechanism. Our approach introduces: (1) a ring-shaped closed-loop training paradigm; (2) a triple feedback mechanism enforcing semantic reversibility, visual consistency, and attention alignment; and (3) a frozen Consistency Feedback Plugin (CFP) integrating semantic reconstruction, visual description generation, and attention supervision modules to impose multi-granularity cross-modal consistency constraints. Evaluated on multiple benchmarks, our method significantly reduces hallucination rates while improving factual accuracy and model interpretability.
📝 Abstract
While Multimodal Large Language Models (MLLMs) have achieved remarkable progress in open-ended visual question answering, they remain vulnerable to hallucinations: outputs that contradict or misrepresent the input semantics, posing a critical challenge to their reliability and factual consistency. Existing methods often rely on external verification or post-hoc correction and lack an internal mechanism for validating outputs directly during training. To bridge this gap, we propose ReLoop, a unified closed-loop training framework that encourages multimodal consistency for cross-modal understanding in MLLMs. ReLoop adopts a ring-shaped structure that integrates three complementary consistency feedback mechanisms, obliging MLLMs to "see twice and think backwards". Specifically, ReLoop employs a frozen Consistency Feedback Plugin (CFP) comprising semantic reconstruction, visual description, and attention supervision modules. These components collectively enforce semantic reversibility, visual consistency, and interpretable attention, enabling the model to correct its outputs during training. Extensive evaluations and analyses demonstrate the effectiveness of ReLoop in reducing hallucination rates across multiple benchmarks, establishing a robust method for hallucination mitigation in MLLMs. We will release our source code and data in the camera-ready version.
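To make the triple feedback idea concrete, here is a minimal, illustrative sketch (not the authors' released code) of how the three consistency signals — semantic reversibility, visual consistency, and attention alignment — could be combined into a single training loss. The embedding inputs, cosine-similarity scoring, L1 attention penalty, and loss weights are all assumptions for illustration; the actual CFP modules and loss formulation are defined in the paper.

```python
# Illustrative sketch of a ReLoop-style combined consistency loss.
# All function names, weights, and scoring choices are hypothetical.
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def reloop_consistency_loss(question_emb, reconstructed_question_emb,
                            image_emb, description_emb,
                            model_attn, reference_attn,
                            w_sem=1.0, w_vis=1.0, w_attn=1.0):
    """Weighted sum of three consistency penalties:

    - semantic reversibility: the answer should allow reconstructing the
      original question (compare question embeddings);
    - visual consistency: a description generated from the answer should
      match the image (compare image/description embeddings);
    - attention alignment: the model's attention distribution should match
      a reference map (mean L1 distance between attention weights).
    """
    sem_loss = 1.0 - cosine_similarity(question_emb, reconstructed_question_emb)
    vis_loss = 1.0 - cosine_similarity(image_emb, description_emb)
    attn_loss = (sum(abs(p - q) for p, q in zip(model_attn, reference_attn))
                 / len(model_attn))
    return w_sem * sem_loss + w_vis * vis_loss + w_attn * attn_loss
```

A perfectly consistent output (identical question reconstruction, matching visual description, aligned attention) yields zero loss, while any divergence along the three axes adds a penalty, which is the intuition behind "seeing twice and thinking backwards".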