🤖 AI Summary
This work addresses a core challenge in reinforcement learning for training large language models on difficult reasoning tasks: insufficient exploration often yields no non-zero reward signal, and learning fails as a result. The authors propose Privileged On-Policy Exploration (POPE), a novel approach that leverages human- or oracle-provided solution prefixes as exploration guidance—rather than as training targets—to inject privileged information directly into the policy's rollouts. This guidance effectively steers exploration toward meaningful rewards while avoiding the interference caused by mixing easy and hard examples during training. Through a synergistic mechanism between instruction following and reasoning, behaviors learned under guidance transfer back to the original, unguided task. Experiments demonstrate that POPE substantially expands the set of solvable problems and achieves significant performance gains across multiple challenging reasoning benchmarks.
📝 Abstract
Reinforcement learning (RL) has improved the reasoning abilities of large language models (LLMs), yet state-of-the-art methods still fail to learn on many training problems. On hard problems, on-policy RL rarely explores even a single correct rollout, yielding zero reward and thus no learning signal to drive improvement. We find that natural remedies for this exploration problem from classical RL, such as entropy bonuses, more permissive clipping of the importance ratio, or direct optimization of pass@k objectives, do not resolve the issue and often destabilize optimization without improving solvability. A natural alternative is to leverage transfer from easier problems. However, we show that mixing easy and hard problems during RL training is counterproductive due to ray interference, where optimization focuses on already-solvable problems in a way that actively inhibits progress on harder ones. To address this challenge, we introduce Privileged On-Policy Exploration (POPE), an approach that leverages human-written or other oracle solutions as privileged information to guide exploration on hard problems, unlike methods that use oracle solutions as training targets (e.g., off-policy RL methods or warmstarting from SFT). POPE augments hard problems with prefixes of oracle solutions, enabling RL to obtain non-zero rewards during guided rollouts. Crucially, the resulting behaviors transfer back to the original, unguided problems through a synergy between instruction-following and reasoning. Empirically, POPE expands the set of solvable problems and substantially improves performance on challenging reasoning benchmarks.
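The core mechanism described above—augmenting a hard problem with prefixes of an oracle solution so that guided rollouts can earn non-zero reward—can be sketched as follows. This is a minimal illustration with hypothetical helper and prompt formats; the paper's actual prompt template, prefix schedule, and RL training loop are not specified here.

```python
def make_guided_prompts(problem: str, oracle_solution: str,
                        fractions=(0.25, 0.5, 0.75)):
    """Augment a hard problem with prefixes of an oracle solution.

    Each returned prompt asks the policy to continue from a partial
    oracle solution, steering exploration toward rollouts that can
    receive non-zero reward. The prompt wording and prefix fractions
    are illustrative assumptions, not the paper's exact format.
    """
    steps = oracle_solution.split("\n")
    prompts = []
    for f in fractions:
        # Reveal the first k solution steps as privileged guidance.
        k = max(1, int(len(steps) * f))
        prefix = "\n".join(steps[:k])
        prompts.append(
            f"{problem}\n\nPartial solution (continue from here):\n{prefix}"
        )
    return prompts

# Toy example: longer prefixes give the policy progressively more guidance.
problem = "Prove that the sum of two even integers is even."
oracle = ("Let a = 2m and b = 2n.\n"
          "Then a + b = 2m + 2n = 2(m + n).\n"
          "Since m + n is an integer, a + b is even.")
for p in make_guided_prompts(problem, oracle):
    print(p)
    print("---")
```

During training, rollouts sampled from these guided prompts would be scored by the usual task reward; because the prefix narrows the search space, correct completions become much more likely, giving RL a learning signal on otherwise unsolvable problems.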