🤖 AI Summary
It remains unclear whether existing reinforcement learning (RL) post-training methods genuinely encourage multimodal large language models to leverage visual information. This work proposes an analytical framework that treats hallucination as a diagnostic signal: modality-specific perturbations induce controllable hallucinations, compelling the model to rely on hallucinated reasoning during both training and evaluation and thereby exposing the underlying RL dynamics and intrinsic dataset characteristics. The experiments provide the first systematic evidence that RL post-training can substantially enhance reasoning performance even when the model reasons solely from hallucinated information, in some scenarios outperforming standard training. These findings challenge the conventional assumption that multimodal models must depend on authentic visual inputs for effective reasoning.
📝 Abstract
The recent success of reinforcement learning (RL) in large reasoning models has inspired the growing adoption of RL for post-training Multimodal Large Language Models (MLLMs) to enhance their visual reasoning capabilities. Although many studies have reported improved performance, it remains unclear whether RL training truly enables models to learn from visual information. In this work, we propose the Hallucination-as-Cue Framework, an analytical framework designed to investigate the effects of RL-based post-training on multimodal reasoning models from the perspective of model hallucination. Specifically, we introduce hallucination-inductive, modality-specific corruptions that remove or replace essential information required to derive correct answers, thereby forcing the model to reason through hallucination. By applying these corruptions during both training and evaluation, our framework provides a unique perspective for diagnosing RL training dynamics and understanding the intrinsic properties of datasets. Through extensive experiments and analyses across multiple multimodal reasoning benchmarks, we reveal that the role of model hallucination in RL training is more significant than previously recognized. For instance, we find that RL post-training under purely hallucination-inductive settings can still significantly improve models' reasoning performance, and in some cases even outperform standard training. These findings challenge prevailing assumptions about MLLM reasoning training and motivate the development of more modality-aware RL-based training designs.
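The abstract does not specify the corruption operators, so as a rough illustration only, here is a minimal Python sketch of what a hallucination-inductive corruption of the visual modality might look like for an image-question sample. All function and field names (`corrupt_visual_modality`, the `"image"` key, the `mode` values) are hypothetical and not taken from the paper:

```python
import numpy as np
from PIL import Image

def corrupt_visual_modality(sample: dict, mode: str = "remove") -> dict:
    """Hypothetical hallucination-inductive corruption of the visual modality.

    Removes ("remove") or replaces ("replace") the visual information needed
    to answer the question, so any fluent answer must arise from hallucination
    rather than from the image. Assumes `sample` holds a PIL image under the
    "image" key alongside the question text.
    """
    corrupted = dict(sample)
    width, height = sample["image"].size
    if mode == "remove":
        # No visual signal: a uniform gray canvas of the same size.
        corrupted["image"] = Image.new("RGB", (width, height), (128, 128, 128))
    elif mode == "replace":
        # Misleading visual signal: unrelated random noise.
        noise = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)
        corrupted["image"] = Image.fromarray(noise)
    else:
        raise ValueError(f"unknown corruption mode: {mode}")
    return corrupted

# Applied identically during RL post-training and at evaluation, e.g.:
# train_batch = [corrupt_visual_modality(s, mode="replace") for s in train_batch]
```

Under a setup like this, any reward the model earns cannot be attributed to genuine visual grounding, which is what allows hallucination to serve as the diagnostic cue the framework is named for.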