🤖 AI Summary
This work addresses instability in training multimodal large language models with GRPO reinforcement learning, which often stems from sparse rewards and vanishing advantage signals: when a task is too easy or too difficult, within-group rewards become nearly uniform and provide little optimization signal. To mitigate this, the authors propose a difficulty-adaptive variant-advantage method that dynamically assesses task difficulty via a global difficulty-aware mechanism, samples difficulty-matched variants, and computes difficulty-weighted, normalized advantages by integrating local and global group information. This alleviates reward sparsity and advantage collapse, achieving significant gains over existing methods across six mainstream multimodal reasoning benchmarks while improving both training efficiency and inference performance.
📝 Abstract
Reinforcement learning (RL) with group relative policy optimization (GRPO) has become a widely adopted approach for enhancing the reasoning capabilities of multimodal large language models (MLLMs). While GRPO enables long-chain reasoning without a critic, it often suffers from sparse rewards on difficult problems and advantage vanishing when group-level rewards are too consistent for overly easy or hard problems. Existing solutions (sample expansion, selective utilization, and indirect reward design) often fail to maintain enough variance in within-group reward distributions to yield clear optimization signals. To address this, we propose DIVA-GRPO, a difficulty-adaptive variant advantage method that adjusts variant difficulty distributions from a global perspective. DIVA-GRPO dynamically assesses problem difficulty, samples variants with appropriate difficulty levels, and calculates advantages across local and global groups using difficulty-weighted and normalized scaling. This alleviates reward sparsity and advantage vanishing while improving training stability. Extensive experiments on six reasoning benchmarks demonstrate that DIVA-GRPO outperforms existing approaches in training efficiency and reasoning performance. Code: https://github.com/Siaaaaaa1/DIVA-GRPO
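To make the failure mode and the proposed remedy concrete, here is a minimal sketch of the idea. The first function is the standard GRPO within-group normalization, where advantages vanish when all rewards in a group are identical; the second blends local group statistics with global-group statistics and scales by a simple difficulty proxy. The blending rule, the `alpha` coefficient, and the difficulty proxy are illustrative assumptions, not the paper's exact formulation:

```python
import statistics

def grpo_advantages(rewards):
    """Standard GRPO: normalize each reward against its group mean/std."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)
    if sigma == 0:
        # All rewards identical (problem too easy or too hard):
        # advantages collapse to zero -- the failure mode DIVA-GRPO targets.
        return [0.0] * len(rewards)
    return [(r - mu) / sigma for r in rewards]

def difficulty_weighted_advantages(local_rewards, global_rewards, alpha=0.5):
    """Illustrative sketch (NOT the paper's exact method): blend local and
    global group statistics so a group with uniform rewards still receives
    a nonzero signal. `alpha` and the difficulty proxy are assumptions."""
    mu_l = statistics.mean(local_rewards)
    mu_g = statistics.mean(global_rewards)
    sd_l = statistics.pstdev(local_rewards)
    sd_g = statistics.pstdev(global_rewards)
    mu = alpha * mu_l + (1 - alpha) * mu_g
    sd = alpha * sd_l + (1 - alpha) * sd_g
    if sd == 0:
        return [0.0] * len(local_rewards)
    # Difficulty proxy: lower local mean reward => harder problem.
    difficulty = 1.0 - mu_l
    return [difficulty * (r - mu) / sd for r in local_rewards]
```

For a group where every rollout fails (rewards all 0), `grpo_advantages` returns all zeros, while the blended variant still produces a nonzero (negative) signal drawn from the global statistics, which is the qualitative behavior the abstract describes.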