🤖 AI Summary
Existing group-based reinforcement learning methods, such as GRPO and GMPO, rely on fixed aggregation geometries that struggle to accommodate the heterogeneity and dynamic shifts inherent in trajectory evolution. This work proposes Power-Mean Policy Optimization (PMPO), a framework that generalizes the aggregation operation in group-based RL to a tunable power-mean form. By introducing an exponent \( p \), PMPO recovers GRPO and GMPO as special cases and incorporates a Clip-aware Effective Sample Size (ESS) mechanism that dynamically adjusts \( p \) based on each trajectory's clipping ratio, enabling continuous, adaptive interpolation between the arithmetic and geometric means. Experiments show that PMPO outperforms strong baselines across multiple mathematical reasoning benchmarks, indicating improved stability and generalization.
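To make the interpolation concrete, a minimal sketch of the power-mean aggregation follows. The function name and the sample importance ratios are illustrative, not from the paper; the key property is that \( p = 1 \) yields the arithmetic mean (GRPO-style aggregation) and the \( p \to 0 \) limit yields the geometric mean (GMPO-style aggregation).

```python
import math

def power_mean(weights, p, eps=1e-8):
    """Power mean M_p(w) = (mean(w_i^p))^(1/p) over positive weights.

    p = 1 recovers the arithmetic mean (GRPO-style aggregation);
    the limit p -> 0 recovers the geometric mean (GMPO-style aggregation),
    computed here as exp(mean(log w_i)).
    """
    if abs(p) < eps:
        return math.exp(sum(math.log(w) for w in weights) / len(weights))
    return (sum(w ** p for w in weights) / len(weights)) ** (1.0 / p)

# Hypothetical per-token importance ratios for one trajectory.
ratios = [0.9, 1.1, 1.5, 0.4]
arithmetic = power_mean(ratios, 1.0)  # aggressive, GRPO-like
geometric = power_mean(ratios, 0.0)   # conservative, GMPO-like
```

By the AM-GM inequality the geometric mean never exceeds the arithmetic mean, which is why smaller \( p \) yields the more conservative objective.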
📝 Abstract
Group-based reinforcement learning has evolved from the arithmetic mean of GRPO to the geometric mean of GMPO. While GMPO improves stability by optimizing a more conservative objective, it shares a fundamental limitation with GRPO: reliance on a fixed aggregation geometry that ignores the evolving and heterogeneous nature of each trajectory. In this work, we unify these approaches under Power-Mean Policy Optimization (PMPO), a generalized framework that parameterizes the aggregation geometry via the power-mean exponent p. Within this framework, GRPO and GMPO are recovered as special cases. Theoretically, we demonstrate that adjusting p modulates the concentration of gradient updates, effectively reweighting tokens based on their advantage contribution. To determine p adaptively, we introduce a Clip-aware Effective Sample Size (ESS) mechanism. Specifically, we propose a deterministic rule that maps each trajectory's clipping fraction to a target ESS, then solve for the p that aligns the trajectory-induced ESS with this target. This allows PMPO to dynamically transition between the aggressive arithmetic mean for reliable trajectories and the conservative geometric mean for unstable ones. Experiments on multiple mathematical reasoning benchmarks demonstrate that PMPO outperforms strong baselines.
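The Clip-aware ESS mechanism described above can be sketched as follows. Everything here is an assumption for illustration, not the paper's actual formulas: we take the normalized ESS of the p-powered weights, (Σ wᵢᵖ)² / (n Σ wᵢ²ᵖ), a simple linear map from clipping fraction to target ESS, and bisection to solve for p. The normalized ESS equals 1 at p = 0 and decreases monotonically as p grows, so bisection on [0, 1] is well-posed.

```python
def ess_fraction(weights, p):
    """Normalized ESS of p-powered weights: (sum w^p)^2 / (n * sum w^(2p)).

    Equals 1 at p = 0 (all powered weights identical) and shrinks as p
    grows, since larger p concentrates mass on the extreme weights.
    """
    n = len(weights)
    s1 = sum(w ** p for w in weights)
    s2 = sum(w ** (2 * p) for w in weights)
    return (s1 * s1) / (n * s2)

def solve_p(weights, target_frac, lo=0.0, hi=1.0, iters=50):
    """Bisect for the p whose induced ESS fraction matches the target."""
    if ess_fraction(weights, hi) >= target_frac:
        return hi  # even the arithmetic end (p = 1) keeps ESS above target
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ess_fraction(weights, mid) > target_frac:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def clip_aware_p(weights, clip_frac, floor=0.6):
    """Hypothetical rule: heavier clipping -> higher ESS target -> smaller p,
    pushing unstable trajectories toward the conservative geometric mean."""
    target = floor + (1.0 - floor) * clip_frac
    return solve_p(weights, target)
```

Under this sketch, a heavily clipped trajectory gets a high ESS target and thus a p near 0 (geometric, conservative), while a lightly clipped one keeps p near 1 (arithmetic, aggressive), matching the adaptive transition the abstract describes.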