🤖 AI Summary
This work addresses the high variance and vanishing gradient issues in Group Relative Policy Optimization (GRPO) under small-group-size settings and reward saturation scenarios. To this end, the authors propose the Empirical Bayes Policy Optimization (EBPO) framework, which dynamically blends local group baselines with global policy statistics via empirical Bayes shrinkage estimation to construct a low-variance, high-stability policy gradient estimator. Theoretical analysis demonstrates that EBPO achieves lower mean squared error than GRPO, exhibits bounded entropy decay, and preserves non-zero penalty signals even in failure cases. Experimental results show that EBPO significantly outperforms GRPO and other baselines on benchmarks such as AIME and OlympiadBench, with particularly strong performance in small-batch training and difficulty-stratified curriculum learning settings.
📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has proven effective for enhancing the reasoning capabilities of Large Language Models (LLMs). However, dominant approaches like Group Relative Policy Optimization (GRPO) face critical stability challenges: they suffer from high estimator variance under computational constraints (small group sizes) and vanishing gradient signals in saturated failure regimes where all responses yield identical zero rewards. To address this, we propose Empirical Bayes Policy Optimization (EBPO), a novel framework that regularizes local group-based baselines by borrowing strength from the policy's accumulated global statistics. Instead of estimating baselines in isolation, EBPO employs a shrinkage estimator that dynamically balances local group statistics with a global prior updated via Welford's online algorithm. Theoretically, we demonstrate that EBPO guarantees strictly lower Mean Squared Error (MSE), bounded entropy decay, and non-vanishing penalty signals in failure scenarios compared to GRPO. Empirically, EBPO consistently outperforms GRPO and other established baselines across diverse benchmarks, including AIME and OlympiadBench. Notably, EBPO exhibits superior training stability, achieving strong performance gains even with small group sizes, and benefits significantly from difficulty-stratified curriculum learning.
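To make the core idea concrete, here is a minimal sketch of a shrinkage baseline of the kind the abstract describes: a global running mean maintained with Welford's online algorithm, pulled together with the local group mean. The paper's exact shrinkage weight is not given here, so the pseudo-count form `lam = G / (G + kappa)` and the names `WelfordStats`, `shrunk_baseline`, and `kappa` are illustrative assumptions, not the authors' implementation.

```python
class WelfordStats:
    """Running mean/variance of observed rewards (Welford's online algorithm)."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # Unbiased sample variance; 0.0 until two samples are seen.
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0


def shrunk_baseline(group_rewards, global_stats, kappa=4.0):
    """Empirical-Bayes-style baseline: shrink the local group mean toward
    the global running mean. kappa acts as a prior pseudo-count (an assumed
    hyperparameter, not taken from the paper)."""
    g = len(group_rewards)
    local_mean = sum(group_rewards) / g
    lam = g / (g + kappa)  # more weight on the local mean as the group grows
    return lam * local_mean + (1.0 - lam) * global_stats.mean
```

Note the failure-mode behavior this construction gives: if every response in a group scores zero, the pure group mean (as in GRPO) is zero and all advantages vanish, but the shrunk baseline stays positive whenever the global mean is positive, so `reward - baseline` still delivers a non-zero penalty signal.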