🤖 AI Summary
This work addresses the challenge of inefficient training in traditional reinforcement learning for large language model reasoning, where sparse correct trajectories lead to vanishing policy gradients and poor sample efficiency. To overcome this, the authors propose PrefixRL, which reuses prefixes of successful off-policy trajectories drawn from historical samples: the policy conditions on an off-policy prefix and completes the remainder of the generation with on-policy reinforcement learning. This approach maintains training stability while significantly improving learning efficiency on difficult tasks. Notably, PrefixRL exhibits back-generalization: training only on prefixed tasks transfers to unprefixed ones. Experimental results demonstrate that PrefixRL reaches the target reward twice as fast and achieves a 3x higher final reward on challenging problems, with consistent gains across multiple benchmarks and model families, underscoring its strong generalization capability.
📝 Abstract
Typical reinforcement learning (RL) methods for LLM reasoning waste compute on hard problems, where correct on-policy traces are rare, policy gradients vanish, and learning stalls. To bootstrap more efficient RL, we consider reusing old sampling FLOPs (from prior inference or RL training) in the form of off-policy traces. Standard off-policy methods supervise against off-policy data, causing instabilities during RL optimization. We introduce PrefixRL, where we condition on the prefix of successful off-policy traces and run on-policy RL to complete them, side-stepping off-policy instabilities. PrefixRL boosts the learning signal on hard problems by modulating the difficulty of the problem through the off-policy prefix length. We prove that the PrefixRL objective is not only consistent with the standard RL objective but also more sample efficient. Empirically, we discover back-generalization: training only on prefixed problems generalizes to out-of-distribution unprefixed performance, with learned strategies often differing from those in the prefix. In our experiments, we source the off-policy traces by rejection sampling with the base model, creating a self-improvement loop. On hard reasoning problems, PrefixRL reaches the same training reward 2x faster than the strongest baseline (SFT on off-policy data then RL), even after accounting for the compute spent on the initial rejection sampling, and increases the final reward by 3x. The gains transfer to held-out benchmarks, and PrefixRL is still effective when off-policy traces are derived from a different model family, validating its flexibility in practical settings.
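The core mechanism described above — conditioning on a prefix of a successful off-policy trace and letting the policy complete the rest on-policy, with prefix length controlling difficulty — can be sketched as a small helper. This is a minimal illustration under assumed interfaces (`make_prefixed_task`, whitespace tokenization, a fractional prefix length are all hypothetical, not from the paper):

```python
def make_prefixed_task(problem, success_trace, prefix_frac, tokenize=str.split):
    """Build one PrefixRL-style training task (hypothetical sketch).

    The first `prefix_frac` of a successful off-policy trace is kept as a
    fixed conditioning prefix; only tokens generated after `loss_start`
    would receive on-policy policy-gradient updates. A longer prefix makes
    the remaining completion easier, modulating problem difficulty.
    """
    tokens = tokenize(success_trace)
    k = int(len(tokens) * prefix_frac)
    return {
        "prompt": problem,          # original problem statement
        "prefix": tokens[:k],       # off-policy prefix, no gradient here
        "loss_start": k,            # on-policy completion begins here
    }

# Halving the trace leaves a shorter, easier on-policy completion.
task = make_prefixed_task("Prove X.", "step1 step2 step3 step4", 0.5)
```

In a full training loop one would anneal `prefix_frac` toward zero so that, by the end of training, the policy solves the unprefixed problem on its own — consistent with the back-generalization result reported above.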