🤖 AI Summary
Existing Reinforcement Learning with Verifiable Rewards (RLVR) methods suffer from indiscriminate and myopic rollout usage, leading to noisy supervision signals and poor sample efficiency. This work addresses these limitations by formulating rollout scheduling as a contextual bandit problem and introducing a unified neural scheduling framework that adaptively selects high-value rollouts. The proposed approach enables noise-aware selection within groups and facilitates global reuse of historical rollouts. Theoretically, it provides a sublinear regret bound, ensuring principled performance guarantees. Empirically, it achieves substantial improvements in both performance and training efficiency across six mathematical reasoning benchmarks, demonstrating its generality and effectiveness when integrated with diverse RLVR algorithms.
📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) is an effective paradigm for improving the reasoning capabilities of large language models. However, existing RLVR methods utilize rollouts in an indiscriminate and short-horizon manner: responses of heterogeneous quality within each prompt are treated uniformly, and historical rollouts are discarded after a single use. This leads to noisy supervision, poor sample efficiency, and suboptimal policy updates. We address these issues by formulating rollout scheduling in RLVR as a contextual bandit problem and proposing a unified neural scheduling framework that adaptively selects high-value rollouts throughout training. Each rollout is treated as an arm whose reward is defined by the induced performance gain between consecutive optimization steps. The resulting scheduler supports both noise-aware intra-group selection and adaptive global reuse of historical rollouts within a single principled framework. We provide theoretical justification by deriving sublinear regret bounds and showing that enlarging the rollout buffer improves the achievable performance upper bound. Experiments on six mathematical reasoning benchmarks demonstrate consistent gains in performance and training efficiency across multiple RLVR optimization methods.
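The core idea, treating each rollout as a bandit arm whose reward is the performance gain it induces between optimization steps, can be sketched with a standard linear contextual bandit (LinUCB-style) selector. This is a minimal illustrative sketch, not the paper's actual neural scheduler: the `RolloutScheduler` class, the two-dimensional feature vector, and the simulated gain signal are all hypothetical assumptions introduced here for illustration.

```python
import numpy as np

class RolloutScheduler:
    """Illustrative LinUCB-style contextual bandit over rollouts (hypothetical sketch).

    Each candidate rollout is an 'arm' described by a feature vector
    (e.g. verifier reward, normalized length); the bandit reward is the
    performance gain observed after training on the selected rollouts.
    """

    def __init__(self, dim, alpha=1.0):
        self.alpha = alpha        # exploration strength
        self.A = np.eye(dim)      # regularized design matrix
        self.b = np.zeros(dim)    # accumulated gain-weighted features

    def select(self, contexts, k=1):
        """Pick the k rollouts with the highest upper-confidence-bound scores."""
        theta = np.linalg.solve(self.A, self.b)
        A_inv = np.linalg.inv(self.A)
        scores = [x @ theta + self.alpha * np.sqrt(x @ A_inv @ x) for x in contexts]
        return np.argsort(scores)[::-1][:k]

    def update(self, context, gain):
        """Record the performance gain induced by a selected rollout."""
        self.A += np.outer(context, context)
        self.b += gain * context


# Toy usage with a simulated training loop: 8 candidate rollouts per step,
# each described by two random features; the (hypothetical) induced gain
# correlates with the first feature only.
rng = np.random.default_rng(0)
sched = RolloutScheduler(dim=2, alpha=0.5)
for _ in range(200):
    contexts = rng.random((8, 2))
    for i in sched.select(contexts, k=2):
        gain = contexts[i][0] + 0.1 * rng.standard_normal()
        sched.update(contexts[i], gain)

theta = np.linalg.solve(sched.A, sched.b)
print(theta)  # the learned weights should favor the first (gain-correlated) feature
```

The sublinear regret guarantee mentioned in the abstract is of the kind such UCB-style selectors admit: cumulative regret grows slower than linearly in the number of steps, so the average per-step loss against the best selection rule vanishes over training.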