🤖 AI Summary
This work addresses a critical limitation in large language models (LLMs) trained with reinforcement learning for reasoning: the shared policy between training trajectories and inference responses induces an objective conflict, constraining exploration and degrading reasoning performance. To resolve this, the authors propose R²PO, which explicitly decouples these two processes. By introducing a lightweight residual rollout-head, R²PO enables controllable diversification of training trajectories while preserving the stability of the inference policy. This approach effectively mitigates the objective conflict, yielding consistent performance gains—improving average accuracy by 3.4% on MATH-500 and 1.3% on APPS—and substantially reducing format errors and length deviations, thereby enhancing training stability.
📝 Abstract
Reinforcement learning has become a central paradigm for improving LLM reasoning. However, existing methods use a single policy to produce both inference responses and training optimization trajectories. The objective conflict between generating stable inference responses and diverse training trajectories leads to insufficient exploration, which harms reasoning capability. To address this problem, we propose R$^2$PO (Residual Rollout Policy Optimization), which introduces a lightweight Residual Rollout-Head atop the policy to decouple training trajectories from inference responses, enabling controlled trajectory diversification during training while keeping inference generation stable. Experiments across multiple benchmarks show that our method consistently outperforms baselines, achieving average accuracy gains of 3.4% on MATH-500 and 1.3% on APPS, while also reducing formatting errors and mitigating length bias for stable optimization. Our code is publicly available at https://github.com/RRPO-ARR/Code.
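The core idea — a residual head that perturbs the policy's output distribution only when sampling training trajectories, leaving inference untouched — can be illustrated with a minimal sketch. This is an assumption-laden toy in NumPy, not the authors' implementation: the class name `ResidualRolloutHead`, the linear residual form, and the `scale` parameter are all illustrative choices.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class ResidualRolloutHead:
    """Hypothetical sketch of the decoupling idea: a small learned head
    adds a residual to the base policy's logits during training rollouts,
    diversifying trajectories; at inference the base logits pass through
    unchanged, so response generation stays stable."""

    def __init__(self, hidden_dim, vocab_size, scale=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # small random projection stands in for a learned residual head
        self.W = scale * rng.standard_normal((hidden_dim, vocab_size))

    def __call__(self, hidden, base_logits, training=True):
        if not training:
            return base_logits              # inference: base policy only
        return base_logits + hidden @ self.W  # training: residual diversification

# toy usage (shapes and values are illustrative)
hidden_dim, vocab_size = 8, 5
rng = np.random.default_rng(1)
hidden = rng.standard_normal(hidden_dim)
base_logits = rng.standard_normal(vocab_size)

head = ResidualRolloutHead(hidden_dim, vocab_size)
p_infer = softmax(head(hidden, base_logits, training=False))
p_train = softmax(head(hidden, base_logits, training=True))
```

Here `p_infer` is exactly the base policy's distribution, while `p_train` is a perturbed version used only to sample optimization trajectories — the separation the abstract describes.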