🤖 AI Summary
This work addresses challenges in policy optimization that hinder reinforcement learning for reasoning tasks: uniform rollout allocation disregards differences in gradient variance across problems, the softmax policy structure attenuates gradients for high-confidence actions, and overly large updates destabilize training. To tackle these issues, the authors propose DynaMO, a framework that dynamically allocates rollouts at the sequence level by minimizing gradient variance, introduces advantage modulation at the token level to compensate for gradient decay, and stabilizes update magnitudes by monitoring entropy variation. The key contributions are the first theoretical derivation of a Bernoulli-variance-based rollout allocation criterion and a novel gradient-aware advantage modulation mechanism. Experiments demonstrate that DynaMO significantly outperforms existing RLVR methods across multiple mathematical reasoning benchmarks, achieving both high efficiency and robustness.
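The Bernoulli-variance allocation idea can be sketched as follows. For a problem with empirical pass rate p, the per-rollout reward variance is p(1 − p), and the standard optimal-allocation result for minimizing estimator variance under a fixed budget assigns rollouts in proportion to the standard deviation √(p(1 − p)). The helper below is a minimal illustration under that assumption; the function name and the minimum-rollout floor are hypothetical, and the paper's actual allocation rule may differ in detail.

```python
import math

def allocate_rollouts(pass_rates, total_budget, min_rollouts=1):
    """Split a fixed rollout budget across problems in proportion to the
    standard deviation of their Bernoulli reward, sqrt(p * (1 - p)).
    Problems near p = 0.5 (highest reward variance, most informative
    gradients) receive the most rollouts; saturated problems (p near 0
    or 1) receive few."""
    stds = [math.sqrt(p * (1.0 - p)) for p in pass_rates]
    total_std = sum(stds)
    if total_std == 0.0:
        # Every problem is always solved or always failed: no variance
        # signal to exploit, so fall back to a uniform split.
        return [total_budget // len(pass_rates)] * len(pass_rates)
    # Proportional allocation with a small floor so no problem is starved.
    return [max(min_rollouts, round(total_budget * s / total_std))
            for s in stds]

# Example: pass rates 0.0, 0.5, 0.9 under a budget of 16 rollouts.
print(allocate_rollouts([0.0, 0.5, 0.9], 16))
```

The allocation concentrates the budget on the p = 0.5 problem, in contrast to a uniform split, which would spend a third of the budget on the already-solved-or-unsolvable extremes.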
📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has proven effective for Large Language Model (LLM) reasoning, yet current methods face two key challenges in resource allocation and policy optimization dynamics: (i) uniform rollout allocation ignores the heterogeneity of gradient variance across problems, and (ii) the softmax policy structure attenuates gradients for high-confidence correct actions, while excessive gradient updates may destabilize training. To address these challenges, we propose DynaMO, a theoretically grounded, dual-pronged optimization framework. At the sequence level, we prove that uniform allocation is suboptimal and derive a variance-minimizing allocation from first principles, establishing Bernoulli variance as a computable proxy for gradient informativeness. At the token level, we develop gradient-aware advantage modulation grounded in a theoretical analysis of gradient magnitude bounds: the modulation compensates for the gradient attenuation of high-confidence correct actions, while entropy changes serve as computable indicators to stabilize excessive update magnitudes. Extensive experiments on a diverse range of mathematical reasoning benchmarks demonstrate consistent improvements over strong RLVR baselines. Our implementation is available at: \href{https://anonymous.4open.science/r/dynamo-680E/README.md}{https://anonymous.4open.science/r/dynamo}.
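The gradient-attenuation effect named in the abstract follows directly from the softmax: the gradient of log π(a) with respect to the chosen action's own logit is 1 − π(a), which shrinks toward zero as the policy becomes confident. The sketch below illustrates that effect and a hypothetical compensation rule in its spirit; the function names, the epsilon floor, and the cap are assumptions standing in for the paper's actual modulation and entropy-based stabilization, which are not specified here.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def chosen_logit_grad(logits, chosen):
    """Gradient of log pi(chosen) w.r.t. the chosen token's own logit
    under a softmax policy: 1 - pi(chosen). As confidence in the chosen
    token grows, this gradient attenuates toward zero."""
    return 1.0 - softmax(logits)[chosen]

def modulated_advantage(adv, prob, eps=0.1, cap=5.0):
    """Hypothetical gradient-aware modulation: upweight the advantage of
    high-confidence (prob near 1) positively rewarded tokens to offset
    the (1 - pi) attenuation, with a cap so the effective update stays
    bounded -- a crude stand-in for entropy-based stabilization."""
    if adv <= 0:
        return adv  # only compensate correct (positive-advantage) tokens
    return min(adv / max(1.0 - prob, eps), cap * adv)

# A confident chosen token gets a much smaller logit gradient than an
# uncertain one, which is what the modulation counteracts.
print(chosen_logit_grad([0.0, 0.0], 0))   # uncertain: gradient 0.5
print(chosen_logit_grad([6.0, 0.0], 0))   # confident: gradient near 0
```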