BroRL: Scaling Reinforcement Learning via Broadened Exploration

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reinforcement learning for language model reasoning often suffers from insufficient exploration, causing performance saturation after thousands of training steps and markedly diminishing computational efficiency. To address this, we propose a “breadth-first exploration” paradigm: instead of merely increasing training steps, we significantly scale up the number of rollouts per sample to enhance coverage of the policy space. Leveraging the principle of mass conservation, we formulate a mass balance equation and theoretically prove that sufficient exploration ensures monotonic growth in the probability mass assigned to correct tokens. Integrating verifiable reward mechanisms with large-scale rollout sampling, we achieve efficient training on a 1.5B-parameter model. Experiments demonstrate sustained performance gains beyond the saturation point—where ProRL plateaus after 3K steps—achieving new state-of-the-art results for 1.5B models across multiple reasoning benchmarks.
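The monotonic-growth claim can be illustrated with a standard softmax policy-gradient identity (an illustrative one-token derivation, not the paper's own mass balance equation, which is not reproduced here). For a policy $p_a = e^{z_a}/\sum_b e^{z_b}$ over tokens and expected reward $J = \sum_a p_a r_a$:

```latex
\frac{\partial J}{\partial z_a}
  = p_a\!\left(r_a - \sum_b p_b\, r_b\right)
  = p_a\,\bigl(r_a - \mathbb{E}_p[r]\bigr)
```

In expectation, gradient ascent therefore raises the logit of every token whose reward exceeds the average, shifting probability mass toward correct tokens. A finite batch of rollouts only yields a noisy estimate of this expected gradient, and the estimate sharpens as the number of rollouts per example grows, which is the intuition behind scaling N.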

📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a key ingredient for unlocking complex reasoning capabilities in large language models. Recent work ProRL has shown promise in scaling RL by increasing the number of training steps. However, performance plateaus after thousands of steps, with clear diminishing returns from allocating more computation to additional training. In this work, we investigate a complementary paradigm for scaling RL, BroRL: increasing the number of rollouts per example to hundreds to exhaustively broaden exploration, which yields continuous performance gains beyond the saturation point observed in ProRL when scaling the number of training steps. Our approach is motivated by a mass balance equation analysis allowing us to characterize the rate of change in probability mass for correct and incorrect tokens during the reinforcement process. We show that under a one-step RL assumption, sampled rollout tokens always contribute to correct-mass expansion, while unsampled tokens outside rollouts may lead to gains or losses depending on their distribution and the net reward balance. Importantly, as the number of rollouts per example N increases, the effect of unsampled terms diminishes, ensuring overall correct-mass expansion. To validate our theoretical analysis, we conduct simulations under more relaxed conditions and find that a sufficiently large rollout size N, corresponding to ample exploration, guarantees an increase in the probability mass of all correct tokens. Empirically, BroRL revives models saturated after 3K ProRL training steps and demonstrates robust, continuous improvement, achieving state-of-the-art results for the 1.5B model across diverse benchmarks.
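The abstract's simulation argument can be mirrored with a toy experiment (a minimal sketch, not the paper's code: it assumes a single-token softmax policy over a small vocabulary, ±1 verifiable rewards, and a one-step REINFORCE update; the function names, vocabulary size, and learning rate are all illustrative choices):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def correct_mass_delta(n_rollouts, rng, vocab=20, n_correct=5, lr=0.5):
    """One-step REINFORCE update on a toy single-token policy.

    Returns the change in total probability mass of the 'correct'
    tokens after one update estimated from n_rollouts sampled tokens.
    """
    logits = rng.normal(size=vocab)
    correct = np.zeros(vocab, dtype=bool)
    correct[:n_correct] = True           # tokens with verifiable reward +1
    p = softmax(logits)
    samples = rng.choice(vocab, size=n_rollouts, p=p)
    rewards = np.where(correct[samples], 1.0, -1.0)
    # REINFORCE gradient estimate: grad_z log p(a) = onehot(a) - p
    grad = np.zeros(vocab)
    for a, r in zip(samples, rewards):
        g = -p.copy()
        g[a] += 1.0
        grad += r * g
    grad /= n_rollouts
    new_p = softmax(logits + lr * grad)
    return new_p[correct].sum() - p[correct].sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for n in (4, 64, 512):
        deltas = [correct_mass_delta(n, rng) for _ in range(200)]
        frac_pos = np.mean([d > 0 for d in deltas])
        print(f"N={n:4d}  mean change in correct mass={np.mean(deltas):+.4f}"
              f"  P(increase)={frac_pos:.2f}")
```

In this toy setting the expected gradient always moves mass toward the correct tokens, so as N grows the sampled estimate concentrates around it and the fraction of updates that increase correct mass approaches one, echoing the paper's claim that the effect of unsampled terms diminishes with larger rollout sizes.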
Problem

Research questions and friction points this paper is trying to address.

Overcoming performance plateaus in reinforcement learning for language models
Addressing diminishing returns from increased training steps in the ProRL approach
Scaling RL via broadened exploration with more rollouts per example
Innovation

Methods, ideas, or system contributions that make the work stand out.

Broadens exploration by increasing rollouts per example
Uses mass balance analysis for token probability dynamics
Ensures correct token expansion with large rollout sizes