🤖 AI Summary
Existing reinforcement learning methods for enhancing chain-of-thought (CoT) reasoning in multimodal large language models (MLLMs) suffer from poor generalization and weak out-of-distribution (OOD) robustness. To address this, we propose a framework that combines controllable noise-based exploration with Bayesian advantage estimation: Gaussian noise is injected during visual encoding to increase input diversity and encourage exploratory reasoning, and an advantage posterior is derived by jointly modeling a noise prior and the trajectory-reward likelihood, guiding the model toward visually grounded and reliable CoT paths. The method requires no additional human annotations and is compatible with small-scale MLLMs (e.g., Qwen2.5-VL 3B). Experiments demonstrate significant improvements in CoT quality, general reasoning capability, and hallucination suppression, particularly under OOD and input-noise perturbation settings, showing superior generalization and robustness.
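The summary's first ingredient, injecting Gaussian noise during visual encoding, can be sketched as below. This is a minimal illustration, not code from the NoisyGRPO release: the function name, the fixed noise scale, and the image-tensor shapes are all assumptions.

```python
import numpy as np

def inject_visual_noise(pixel_values: np.ndarray, sigma: float,
                        rng: np.random.Generator) -> np.ndarray:
    """Perturb visual inputs with zero-mean Gaussian noise of scale `sigma`.

    `sigma` is the controllable noise level: larger values produce more
    diverse inputs and hence more exploratory rollouts. (Interface is
    illustrative, not taken from the NoisyGRPO codebase.)
    """
    return pixel_values + rng.normal(0.0, sigma, size=pixel_values.shape)

rng = np.random.default_rng(0)
image = rng.random((3, 224, 224))   # stand-in for a preprocessed image tensor
sigma = 0.3                          # hypothetical noise budget for this rollout
noisy_image = inject_visual_noise(image, sigma, rng)
```

In an RL rollout loop, `sigma` would typically be resampled per trajectory, so that the same prompt is explored under several noise levels.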
📝 Abstract
Reinforcement learning (RL) has shown promise in enhancing the general Chain-of-Thought (CoT) reasoning capabilities of multimodal large language models (MLLMs). However, when applied to improve general CoT reasoning, existing RL frameworks often struggle to generalize beyond the training distribution. To address this, we propose NoisyGRPO, a systematic multimodal RL framework that introduces controllable noise into visual inputs for enhanced exploration and explicitly models the advantage estimation process via a Bayesian framework. Specifically, NoisyGRPO improves RL training by: (1) **Noise-Injected Exploration Policy**: Perturbing visual inputs with Gaussian noise to encourage exploration across a wider range of visual scenarios; and (2) **Bayesian Advantage Estimation**: Formulating advantage estimation as a principled Bayesian inference problem, where the injected noise level serves as a prior and the observed trajectory reward as the likelihood. This Bayesian modeling fuses both sources of information to compute a robust posterior estimate of trajectory advantage, effectively guiding MLLMs to prefer visually grounded trajectories over noisy ones. Experiments on standard CoT quality, general capability, and hallucination benchmarks demonstrate that NoisyGRPO substantially improves generalization and robustness, especially in RL settings with small-scale MLLMs such as Qwen2.5-VL 3B. The project page is available at [https://artanic30.github.io/project_pages/NoisyGRPO](https://artanic30.github.io/project_pages/NoisyGRPO/).
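The abstract does not give the exact estimator, but one plausible Gaussian-conjugate reading of "noise level as prior, trajectory reward as likelihood" can be sketched as follows. Everything here is an assumption for illustration: the function name, the choice of `prior_mean = -noise_level` (quality expected to drop as more noise is injected), and the use of Gaussian prior and likelihood, whose posterior mean is a precision-weighted average.

```python
def bayesian_advantage(noise_level: float, reward: float,
                       prior_var: float = 1.0, lik_var: float = 1.0) -> float:
    """Posterior trajectory-quality estimate fusing a noise prior with the reward.

    Hypothetical sketch: the injected noise level defines a Gaussian prior
    over trajectory quality, and the observed trajectory reward acts as a
    Gaussian likelihood; the posterior mean is then the precision-weighted
    combination of the two.
    """
    prior_mean = -noise_level  # assumption: heavier noise -> lower expectation
    posterior_precision = 1.0 / prior_var + 1.0 / lik_var
    posterior_mean = (prior_mean / prior_var + reward / lik_var) / posterior_precision
    return posterior_mean
```

Under this reading, a high reward earned despite heavy input noise is discounted by the prior, while a high reward under light noise yields a large posterior advantage, which is one way to prefer visually grounded trajectories over lucky noisy ones.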