🤖 AI Summary
This paper addresses a limitation of the GRPO algorithm—its restriction to on-policy training—by proposing and validating its first off-policy variant. Methodologically, the authors introduce a clipped surrogate objective, adapt GRPO to the off-policy setting within the PPO framework, employ off-policy advantage estimation, and incorporate verifiable-reward evaluation. They prove that this objective guarantees a monotonic improvement in expected reward. The key contributions are threefold: (1) the first extension of GRPO to the off-policy paradigm; (2) theoretical analysis demonstrating improved training stability and higher sample and memory efficiency compared to on-policy GRPO; and (3) empirical validation showing that off-policy GRPO matches or significantly outperforms the original on-policy version across multiple benchmark tasks.
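The two core ingredients described above—group-relative advantage estimation and a PPO-style clipped surrogate evaluated against an off-policy behavior distribution—can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, the `eps` clipping parameter, and the use of mean/std normalization within each group are assumptions based on standard GRPO/PPO formulations.

```python
import numpy as np

def group_relative_advantages(rewards):
    # GRPO-style baseline: normalize rewards within each group of
    # sampled completions, (r - group mean) / group std.
    rewards = np.asarray(rewards, dtype=float)
    mean = rewards.mean(axis=-1, keepdims=True)
    std = rewards.std(axis=-1, keepdims=True) + 1e-8  # avoid divide-by-zero
    return (rewards - mean) / std

def clipped_surrogate(logp_new, logp_behavior, advantages, eps=0.2):
    # PPO-style clipped surrogate. In the off-policy setting the
    # importance ratio compares the current policy to the behavior
    # policy that actually generated the samples.
    ratio = np.exp(np.asarray(logp_new) - np.asarray(logp_behavior))
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Pessimistic (min) combination yields the lower-bound objective
    # that underpins the monotonic-improvement guarantee.
    return np.minimum(unclipped, clipped).mean()
```

When the behavior policy equals the current policy (all ratios equal 1), the objective reduces to the ordinary on-policy GRPO surrogate, which is why the same clipped form covers both regimes.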
📝 Abstract
We revisit Group Relative Policy Optimization (GRPO) in both on-policy and off-policy optimization regimes. Our motivation comes from recent work on off-policy Proximal Policy Optimization (PPO), which improves training stability, sample efficiency, and memory usage. In addition, a recent analysis of GRPO suggests that estimating the advantage function with off-policy samples could be beneficial. Building on these observations, we adapt GRPO to the off-policy setting. We show that both the on-policy and off-policy GRPO objectives yield an improvement in the expected reward, a result that motivates the use of clipped surrogate objectives in the off-policy version of GRPO. We then compare the empirical performance of reinforcement learning with verifiable rewards for post-training using both GRPO variants. Our results show that off-policy GRPO either significantly outperforms or performs on par with its on-policy counterpart.