Advancing Multimodal Reasoning: From Optimized Cold Start to Staged Reinforcement Learning

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) show weak performance on complex mathematical and logical visual reasoning, and directly applying reinforcement learning (RL) fails to elicit deep, multi-step cross-modal reasoning. Method: a staged RL framework with three components: (1) cold-start initialization on carefully curated pure-text data, which alone surpasses many recent multimodal reasoning models; (2) a modification of the GRPO algorithm that mitigates gradient stagnation in multimodal settings; and (3) a three-stage training paradigm (text cold start, then multimodal RL, then text-only RL) that decouples perceptual grounding from cognitive reasoning so the two can co-evolve. Contribution/Results: The resulting ReVisual-R1 model sets a new open-source state of the art among 7B MLLMs on MathVerse, MathVision, LogicVista, and AIME2024/2025, with significant gains in multi-step cross-modal reasoning.

📝 Abstract
Inspired by the remarkable reasoning capabilities of Deepseek-R1 in complex textual tasks, many works attempt to incentivize similar capabilities in Multimodal Large Language Models (MLLMs) by directly applying reinforcement learning (RL). However, they still struggle to activate complex reasoning. In this paper, rather than examining multimodal RL in isolation, we delve into current training pipelines and identify three crucial phenomena: 1) Effective cold start initialization is critical for enhancing MLLM reasoning. Intriguingly, we find that initializing with carefully selected text data alone can lead to performance surpassing many recent multimodal reasoning models, even before multimodal RL. 2) Standard GRPO applied to multimodal RL suffers from gradient stagnation, which degrades training stability and performance. 3) Subsequent text-only RL training, following the multimodal RL phase, further enhances multimodal reasoning. This staged training approach effectively balances perceptual grounding and cognitive reasoning development. By incorporating the above insights and addressing multimodal RL issues, we introduce ReVisual-R1, achieving a new state-of-the-art among open-source 7B MLLMs on challenging benchmarks including MathVerse, MathVision, WeMath, LogicVista, DynaMath, and challenging AIME2024 and AIME2025.
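The abstract attributes training instability to gradient stagnation when standard GRPO is applied to multimodal RL. The paper's actual remedy is not reproduced on this page, but a minimal sketch of the standard group-normalized GRPO advantage (an assumption based on the published GRPO formulation, not the authors' modified objective) shows where stagnation comes from: when every sampled response in a group receives the same reward, all advantages collapse to zero and that prompt contributes no policy gradient.

```python
import statistics

def grpo_advantages(rewards, eps=1e-6):
    """Standard GRPO group-normalized advantage:
    A_i = (r_i - mean(r)) / (std(r) + eps),
    computed over the rewards of one prompt's sampled group."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)  # population std over the group
    return [(r - mu) / (sigma + eps) for r in rewards]

# Mixed outcomes in a group produce a useful learning signal ...
mixed = grpo_advantages([1.0, 0.0, 1.0, 0.0])

# ... but a group where every response earns the same reward (all correct
# or, more commonly in hard multimodal settings, all wrong) yields
# all-zero advantages, so the gradient for that prompt vanishes.
uniform = grpo_advantages([0.0, 0.0, 0.0, 0.0])
```

Under this reading, "mitigating gradient stagnation" amounts to ensuring fewer groups fall into the all-identical-reward case (or handling such groups specially) so that a useful fraction of each batch still carries gradient.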
Problem

Research questions and friction points this paper is trying to address.

Optimizing cold start initialization for MLLM reasoning enhancement
Addressing gradient stagnation in standard GRPO for multimodal RL
Improving multimodal reasoning via staged text and RL training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimized cold start with text data
Staged reinforcement learning approach
Mitigation of gradient stagnation in GRPO
👥 Authors

Shuang Chen
Zhejiang University
Yue Guo
Fudan University
Zhaochen Su
Hong Kong University of Science and Technology
Yafu Li
The Chinese University of Hong Kong
Yulun Wu
Zhejiang University
Jiacheng Chen
Shanghai AI Laboratory
Jiayu Chen
Fudan University
Weijie Wang
Zhejiang University
Xiaoye Qu
Shanghai AI Laboratory
Yu Cheng
The Chinese University of Hong Kong