Depth-Breadth Synergy in RLVR: Unlocking LLM Reasoning Gains with Adaptive Exploration

📅 2025-08-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RLVR methods suffer from insufficient synergy between depth (the maximum solvable problem difficulty) and breadth (the number of samples consumed per training iteration), limiting the reasoning gains of large language models. To address this, we propose Difficulty-Adaptive Rollout Sampling (DARS) and its scalable training framework, DARS-B. DARS-B integrates multi-stage targeted rollouts, cumulative-advantage reweighting, full-batch multi-epoch updates, and token-level entropy regularization to enable efficient exploration of high-difficulty problems and low-noise optimization. We establish, for the first time, a paradigm in which depth and breadth act as orthogonal yet synergistic dimensions that jointly enhance reasoning performance. Experiments demonstrate significant improvements in both Pass@K and Pass@1, without additional inference overhead, validating the effectiveness and generality of this depth-breadth co-enhancement mechanism.

📝 Abstract
Reinforcement Learning with Verifiable Reward (RLVR) has emerged as a powerful paradigm for unlocking reasoning capabilities in large language models, yet its full potential is hindered by two under-explored dimensions: Depth, the hardest problem a model can sample, and Breadth, the number of instances consumed in a single iteration. We dissect the popular GRPO algorithm and reveal a systematic bias: the cumulative advantage disproportionately weights samples with medium accuracy, while down-weighting the low-accuracy instances that are crucial for pushing reasoning boundaries. To rectify this neglect of depth, we introduce Difficulty-Adaptive Rollout Sampling (DARS), which re-weights hard problems through targeted multi-stage rollouts, thereby increasing the number of positive rollouts for hard problems. Empirically, naively enlarging the rollout size only accelerates convergence and can even hurt Pass@K; DARS, in contrast, delivers consistent Pass@K gains without extra inference cost at convergence. Just as we adaptively expand the depth of exploration, we ask whether aggressively scaling the breadth of training data can further amplify reasoning gains. To this end, we substantially scale the batch size and replace PPO's mini-batch iterations with full-batch updates over multiple epochs. Increasing breadth significantly enhances Pass@1 performance, and large-breadth training sustains high token-level entropy, indicating continued exploration and reduced gradient noise. We further present DARS-B, which augments DARS with large breadth, and demonstrate simultaneous gains in Pass@K and Pass@1. These results confirm that breadth and adaptive exploration across depth operate as orthogonal dimensions of RLVR that are key to unleashing its reasoning power.
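The medium-accuracy bias the abstract describes can be seen with a back-of-the-envelope calculation: under GRPO-style group normalization with binary rewards, the expected total advantage magnitude a problem contributes scales with sqrt(p(1-p)) of its per-rollout accuracy p, peaking at medium accuracy and vanishing for the hardest (lowest-p) problems. A minimal sketch, assuming Bernoulli rewards and a group size G (both illustrative choices, not the paper's exact setup):

```python
import math

def total_advantage_mass(p, G=8):
    """Expected total |advantage| mass a problem contributes under GRPO-style
    group normalization, assuming G rollouts with binary Bernoulli(p) rewards.

    Group mean is p and std is sqrt(p(1-p)); the expected sum of
    |(r - p)/std| over the group is G * 2p(1-p) / sqrt(p(1-p))
    = 2 * G * sqrt(p * (1 - p)), which peaks at p = 0.5.
    """
    if p <= 0.0 or p >= 1.0:
        return 0.0  # all-correct or all-wrong groups yield zero advantage
    return 2 * G * math.sqrt(p * (1 - p))

for p in (0.05, 0.25, 0.5, 0.75, 0.95):
    print(f"accuracy {p:.2f} -> advantage mass {total_advantage_mass(p):.2f}")
```

The curve is symmetric and concentrated around p = 0.5, which is the sense in which hard, low-accuracy problems are systematically under-weighted.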
Problem

Research questions and friction points this paper is trying to address.

Optimizing depth and breadth dimensions in RLVR for LLM reasoning
Addressing systematic bias in cumulative-advantage weighting of samples
Enhancing reasoning performance through adaptive exploration strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Difficulty Adaptive Rollout Sampling for hard problems
Large-batch full-update training for breadth scaling
Combined DARS-B approach for orthogonal dimension gains
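The multi-stage rollout idea behind DARS can be sketched as follows: grant hard problems additional rollout rounds until they accumulate enough positive (correct) rollouts, instead of giving every problem the same sampling budget. All names, thresholds, and the fixed random seed below are hypothetical stand-ins for illustration, not the paper's exact procedure:

```python
import random

def adaptive_rollouts(solve_prob, base_n=8, max_stages=3, min_positives=2, rng=None):
    """Multi-stage rollout sketch: sample extra rounds for a problem until it
    has at least `min_positives` correct rollouts or the stage budget runs
    out. `solve_prob` stands in for the model's per-rollout success rate on
    this problem; rewards are simulated as Bernoulli draws.
    """
    rng = rng or random.Random(0)  # fixed seed so the sketch is reproducible
    rewards = []
    for _ in range(max_stages):
        rewards += [1 if rng.random() < solve_prob else 0 for _ in range(base_n)]
        if sum(rewards) >= min_positives:
            break  # enough positive rollouts; stop spending compute here
    return rewards

easy = adaptive_rollouts(0.9)   # easy problem: usually stops after one stage
hard = adaptive_rollouts(0.05)  # hard problem: receives extra stages of sampling
print(len(easy), len(hard))
```

Easy problems finish in one stage, so the extra compute concentrates on the hard problems whose positive rollouts are needed to push the reasoning boundary.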
Authors

Zhicheng Yang
The Hong Kong University of Science and Technology (Guangzhou)

Zhijiang Guo
HKUST (GZ) | HKUST
Natural Language Processing · Machine Learning · Large Language Models

Yinya Huang
Postdoctoral Fellow at ETH AI Center, ETH Zürich; Prev. CityU Hong Kong, SYSU
AI for Math · AI for Science · Reliable Machine Learning · LLMs · NLP

Yongxin Wang
MBZUAI

Dongchun Xie
Sun Yat-sen University

Yiwei Wang
University of California, Merced

Xiaodan Liang
Professor of Computer Science, Sun Yat-sen University, MBZUAI, CMU, NUS
Computer Vision · Embodied AI · Machine Learning

Jing Tang
The Hong Kong University of Science and Technology