OpenVLThinkerV2: A Generalist Multimodal Reasoning Model for Multi-domain Visual Tasks

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges that open-source multimodal generalist models face in reinforcement learning, particularly the large discrepancies in reward distributions across tasks and the difficulty of jointly optimizing perception and reasoning capabilities. To overcome these issues, the authors propose Gaussian GRPO (G²RPO), which aligns the advantage distribution of each task to a standard normal distribution via non-linear distributional matching, thereby ensuring gradient fairness across tasks and improving training stability. On top of this, G²RPO incorporates dual shaping mechanisms, response-length shaping and entropy shaping, to dynamically balance visual perception depth with multi-step reasoning exploration. Evaluated on 18 diverse vision-based benchmarks, the proposed approach significantly outperforms both existing open-source models and state-of-the-art closed-source models, demonstrating strong generalization and robustness.
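To make the distributional-matching step concrete, below is a minimal sketch of how a group of rollout rewards could be forced onto N(0,1). The paper's exact transform is not reproduced here; the sketch assumes a rank-based Gaussianization (inverse normal CDF over empirical ranks), and the function names and the example reward group are illustrative.

```python
import numpy as np
from scipy.stats import norm, rankdata

def grpo_advantage(rewards: np.ndarray) -> np.ndarray:
    """Standard GRPO advantage: linear z-score scaling within the group."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

def gaussian_grpo_advantage(rewards: np.ndarray) -> np.ndarray:
    """Rank-based Gaussianization: map group rewards onto N(0,1).

    Whatever the task's reward topology (binary, skewed, heavy-tailed),
    the resulting advantages follow the same standard normal shape, so no
    single task or outlier sample dominates the gradient, and positive and
    negative advantages stay symmetric around zero. This is an assumed
    instantiation of the paper's non-linear distributional matching.
    """
    ranks = rankdata(rewards)                  # average ranks, ties handled
    quantiles = (ranks - 0.5) / len(rewards)   # map into the open interval (0, 1)
    return norm.ppf(quantiles)                 # inverse normal CDF -> N(0,1) shape

# A heavy-tailed reward group from one prompt: the outlier dominates the
# linear z-score but is tamed by the non-linear matching.
rewards = np.array([0.0, 0.0, 0.1, 0.1, 0.2, 10.0])
print(grpo_advantage(rewards))           # outlier advantage ~ +2.2, rest ~ -0.4
print(gaussian_grpo_advantage(rewards))  # bounded within roughly [-1.0, +1.4]
```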
📝 Abstract
Group Relative Policy Optimization (GRPO) has emerged as the de facto Reinforcement Learning (RL) objective driving recent advancements in Multimodal Large Language Models. However, extending this success to open-source multimodal generalist models remains heavily constrained by two primary challenges: the extreme variance in reward topologies across diverse visual tasks, and the inherent difficulty of balancing fine-grained perception with multi-step reasoning capabilities. To address these issues, we introduce Gaussian GRPO (G$^2$RPO), a novel RL training objective that replaces standard linear scaling with non-linear distributional matching. By mathematically forcing the advantage distribution of any given task to strictly converge to a standard normal distribution, $\mathcal{N}(0,1)$, G$^2$RPO theoretically ensures inter-task gradient equity, mitigates vulnerabilities to heavy-tailed outliers, and offers symmetric updates for positive and negative rewards. Leveraging the enhanced training stability provided by G$^2$RPO, we introduce two task-level shaping mechanisms to seamlessly balance perception and reasoning. First, response-length shaping dynamically elicits extended reasoning chains for complex queries while enforcing direct outputs to bolster visual grounding. Second, entropy shaping tightly bounds the model's exploration zone, effectively preventing both entropy collapse and entropy explosion. Integrating these methodologies, we present OpenVLThinkerV2, a highly robust, general-purpose multimodal model. Extensive evaluations across 18 diverse benchmarks demonstrate its superior performance over strong open-source and leading proprietary frontier models.
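The two shaping mechanisms read naturally as additive reward-shaping terms. The sketch below is one plausible instantiation, assuming a per-task target length and a fixed entropy band; the constants H_LO, H_HI, ALPHA, BETA, and the target lengths are illustrative assumptions, not the paper's formulation.

```python
# Sketch of the dual shaping terms layered on top of the task reward.
# All constants and target lengths below are assumed for illustration.
import math

H_LO, H_HI = 0.3, 1.5   # assumed entropy band; outside it the policy is penalized
ALPHA, BETA = 0.1, 0.1  # assumed shaping coefficients

def length_shaping(response_len: int, is_reasoning_task: bool) -> float:
    """Pull response length toward a per-task target: long chains for
    reasoning-heavy queries, short direct answers for perception queries."""
    target = 512 if is_reasoning_task else 64  # assumed per-task targets
    return -ALPHA * abs(math.log((response_len + 1) / target))

def entropy_shaping(policy_entropy: float) -> float:
    """Penalize mean token entropy outside [H_LO, H_HI], bounding the
    exploration zone against both entropy collapse and entropy explosion."""
    if policy_entropy < H_LO:
        return -BETA * (H_LO - policy_entropy)
    if policy_entropy > H_HI:
        return -BETA * (policy_entropy - H_HI)
    return 0.0

def shaped_reward(task_reward: float, response_len: int,
                  policy_entropy: float, is_reasoning_task: bool) -> float:
    """Task reward plus both shaping terms."""
    return (task_reward
            + length_shaping(response_len, is_reasoning_task)
            + entropy_shaping(policy_entropy))

# Example: a perception query answered with a 900-token response drifts the
# shaped reward down, nudging the policy toward direct outputs.
print(shaped_reward(1.0, response_len=900, policy_entropy=0.8,
                    is_reasoning_task=False))
```

Bounding entropy inside a band, rather than pushing it in one direction, is what guards against both failure modes at once: collapse when the floor binds, explosion when the ceiling does.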
Problem

Research questions and friction points this paper is trying to address.

Multimodal Reasoning
Reinforcement Learning
Reward Variance
Perception-Reasoning Balance
Generalist Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gaussian GRPO
Distributional Matching
Multimodal Reasoning
Entropy Shaping
Response-Length Shaping
👥 Authors
Wenbo Hu · UCLA · Computer Vision, NLP, Embodied AI
Xin Chen · University of California, Los Angeles (UCLA)
Yan Gao-Tian · University of California, Los Angeles (UCLA)
Yihe Deng · University of California, Los Angeles · Machine Learning, Natural Language Processing
Nanyun Peng · University of California, Los Angeles (UCLA)
Kai-Wei Chang · Associate Professor, UCLA · Natural Language Processing, Machine Learning, Vision-Language, Trustworthy NLP