🤖 AI Summary
Existing RLVR methods suffer from insufficient synergy between depth (the maximum solvable problem difficulty) and breadth (the number of samples consumed per training iteration), limiting the reasoning gains of large language models. To address this, we propose Difficulty Adaptive Rollout Sampling (DARS) and its breadth-scaled training framework, DARS-B. DARS-B integrates multi-stage targeted rollouts, cumulative-advantage reweighting, full-batch multi-epoch updates, and token-level entropy regularization, enabling efficient exploration of high-difficulty problems with low-noise optimization. We establish, for the first time, a paradigm in which depth and breadth act as orthogonal yet synergistic dimensions that jointly enhance reasoning performance. Experiments demonstrate significant improvements in both Pass@K and Pass@1, without additional inference overhead at convergence, validating the effectiveness and generality of this depth-breadth co-enhancement mechanism.
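One of the ingredients above, token-level entropy regularization, can be sketched in a few lines. This is a minimal illustration (not the paper's implementation): it computes the Shannon entropy of each next-token distribution and subtracts its mean, scaled by a hypothetical coefficient `beta`, from the policy loss so that the optimizer is rewarded for keeping exploration alive.

```python
import math

def token_entropy(probs):
    # Shannon entropy of one next-token distribution (nats).
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def regularized_loss(pg_loss, token_dists, beta=0.01):
    # Illustrative token-level entropy bonus: average per-token entropy
    # over the sequence, scaled by beta (hypothetical name), subtracted
    # from the policy-gradient loss. Higher entropy -> lower loss.
    mean_h = sum(token_entropy(d) for d in token_dists) / len(token_dists)
    return pg_loss - beta * mean_h
```

A peaked distribution contributes near-zero entropy, so the bonus only matters while the policy still spreads probability mass over several tokens, which matches the paper's observation that large-breadth training sustains high token-level entropy.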
📝 Abstract
Reinforcement Learning with Verifiable Reward (RLVR) has emerged as a powerful paradigm for unlocking reasoning capabilities in large language models, yet its full potential is hindered by two under-explored dimensions: depth (the hardest problem a model can sample) and breadth (the number of instances consumed in a single training iteration). We dissect the popular GRPO algorithm and reveal a systematic bias: the cumulative advantage disproportionately weights samples of medium accuracy while down-weighting the low-accuracy instances that are crucial for pushing reasoning boundaries. To rectify this neglect of depth, we introduce Difficulty Adaptive Rollout Sampling (DARS), which re-weights hard problems through targeted multi-stage rollouts, thereby increasing the number of positive rollouts for hard problems. Empirically, naively enlarging the rollout size only accelerates convergence and can even hurt Pass@K. DARS, in contrast, delivers consistent Pass@K gains without extra inference cost at convergence. Just as we adaptively expand the depth of exploration, we ask whether aggressively scaling the breadth of training data can further amplify reasoning gains. To this end, we substantially scale the batch size and replace PPO's mini-batch iterations with full-batch updates over multiple epochs. Increasing breadth significantly enhances Pass@1 performance, and large-breadth training sustains high token-level entropy, indicating continued exploration and reduced gradient noise. We further present DARS-B, which augments DARS with large breadth, and demonstrate simultaneous gains in Pass@K and Pass@1. These results confirm that breadth and adaptive exploration across depth operate as orthogonal dimensions in RLVR, both key to unleashing its reasoning power.
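The medium-accuracy bias and the DARS remedy can be made concrete with a small sketch, under the standard assumptions of binary (0/1) rewards and GRPO's group-normalized advantage `(r - mean) / std`: for a problem with pass rate `p`, the mean absolute advantage per rollout works out to `2 * sqrt(p * (1 - p))`, which peaks at `p = 0.5` and vanishes for very hard problems. The rollout-allocation rule below is an illustrative heuristic, not the paper's exact DARS procedure.

```python
import math

def mean_abs_advantage(p: float) -> float:
    # With binary rewards at pass rate p, the group mean is p and the
    # std is sqrt(p * (1 - p)); since E|r - p| = 2 p (1 - p), the mean
    # |advantage| per rollout is 2 * sqrt(p * (1 - p)). Peaks at p = 0.5,
    # so medium-accuracy problems dominate the gradient signal.
    if p in (0.0, 1.0):
        return 0.0  # zero-variance groups contribute no advantage at all
    return 2.0 * math.sqrt(p * (1.0 - p))

def extra_rollouts(p: float, base_n: int, target_pos: int = 2) -> int:
    # Illustrative DARS-style heuristic (hypothetical rule): grant a hard
    # problem enough additional rollouts that its expected number of
    # positive samples reaches target_pos.
    if p <= 0.0:
        return 0  # unsolvable at current depth; no finite budget helps
    needed = math.ceil(target_pos / p)
    return max(0, needed - base_n)
```

For example, with 8 base rollouts a problem at 10% pass rate would receive extra rollouts to reach roughly 20 total, while a 50%-accuracy problem receives none, concentrating the additional sampling budget on the hard tail.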