Demystifying Group Relative Policy Optimization: Its Policy Gradient is a U-Statistic

📅 2026-03-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of theoretical understanding of Group Relative Policy Optimization (GRPO), whose statistical properties and performance bounds remain unclear. Adopting a U-statistics perspective, we develop a unified analytical framework that shows the GRPO policy gradient is a U-statistic and establishes its asymptotic equivalence to an oracle policy gradient algorithm. Leveraging this insight, we derive the mean squared error, finite-sample error bounds, and asymptotic distribution of GRPO, and propose a universal group-size scaling law. Our theoretical results demonstrate that GRPO inherits the desirable properties of the oracle algorithm and that the optimal group size is universal across settings. Experimental evaluations corroborate the accuracy of our theoretical predictions.
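
For context on the estimator being analyzed, below is a minimal NumPy sketch of the group-relative policy gradient from the standard GRPO formulation (as introduced in DeepSeekMath): rewards are standardized within each group of G sampled responses before weighting the per-response score gradients. Because every advantage depends symmetrically on all G rewards, the estimate is a function of the entire group, which is the structure the paper identifies as a U-statistic. The function name, array shapes, and the `eps` stabilizer are illustrative assumptions; the exact estimator analyzed in the paper may differ.

```python
import numpy as np

def grpo_gradient(score_grads, rewards, eps=1e-8):
    """Sketch of a GRPO-style policy-gradient estimate for one prompt.

    score_grads: (G, d) array; row i approximates grad_theta log pi_theta(o_i | q).
    rewards:     (G,)   array; scalar reward of each of the G sampled responses.
    """
    rewards = np.asarray(rewards, dtype=float)
    score_grads = np.asarray(score_grads, dtype=float)
    # Group-relative advantage: standardize each reward by the group's
    # mean and standard deviation (eps guards against a zero-variance group).
    advantages = (rewards - rewards.mean()) / (rewards.std() + eps)
    # Weight each response's score gradient by its advantage and average.
    return (advantages[:, None] * score_grads).mean(axis=0)

# Toy usage: a group of G = 4 responses with a 3-dimensional parameter gradient.
rng = np.random.default_rng(0)
print(grpo_gradient(rng.normal(size=(4, 3)), [1.0, 0.0, 0.0, 1.0]))
```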

📝 Abstract
Group relative policy optimization (GRPO), a core methodological component of DeepSeekMath and DeepSeek-R1, has emerged as a cornerstone for scaling the reasoning capabilities of large language models. Despite its widespread adoption and the proliferation of follow-up works, the theoretical properties of GRPO remain understudied. This paper provides a unified framework for understanding GRPO through the lens of classical U-statistics. We demonstrate that the GRPO policy gradient is inherently a U-statistic, allowing us to characterize its mean squared error (MSE) and to derive a finite-sample error bound and the asymptotic distribution of the suboptimality gap of its learned policy. Our findings reveal that GRPO is asymptotically equivalent to an oracle policy gradient algorithm, one with access to a value function that quantifies the quality of the current policy at each training iteration, and that it achieves asymptotically optimal performance within a broad class of policy gradient algorithms. Furthermore, we establish a universal scaling law that offers principled guidance for selecting the optimal group size. Empirical experiments further validate these theoretical findings, demonstrating that the optimal group size is universal and verifying the oracle property of GRPO.
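
As a worked illustration of the abstract's central claim (my notation, not taken from the paper): dropping the standard-deviation normalization, the mean-centered GRPO gradient for a single prompt can be rewritten as a pairwise sum over responses by expanding the group mean and symmetrizing the double sum. Up to the factor (G-1)/(2G), this is exactly a classical U-statistic of order 2 with a symmetric kernel; the paper's general treatment of the standardized estimator is necessarily more involved.

```latex
% Classical U-statistic of order m: for i.i.d. X_1, ..., X_G and a
% symmetric kernel h,
%   U_G = \binom{G}{m}^{-1} \sum_{1 \le i_1 < \cdots < i_m \le G}
%         h(X_{i_1}, \ldots, X_{i_m}).
%
% Mean-centered GRPO gradient for one prompt q, with rewards r_1, ..., r_G
% and score vectors s_i := \nabla_\theta \log \pi_\theta(o_i \mid q):
\hat{g}
  = \frac{1}{G} \sum_{i=1}^{G} (r_i - \bar{r})\, s_i
  = \frac{1}{G^2} \sum_{1 \le i < j \le G} (r_i - r_j)(s_i - s_j)
  = \frac{G-1}{2G}\, U_G,
\quad \text{with kernel } h\bigl((o_i, r_i), (o_j, r_j)\bigr) = (r_i - r_j)(s_i - s_j).
```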
Problem

Research questions and friction points this paper is trying to address.

Group Relative Policy Optimization
U-statistic
policy gradient
theoretical analysis
scaling law
Innovation

Methods, ideas, or system contributions that make the work stand out.

U-statistic
Group Relative Policy Optimization
policy gradient
asymptotic optimality
scaling law
👥 Authors
Hongyi Zhou · Karlsruhe Institute of Technology (reinforcement learning, imitation learning, robotics)
Kai Ye · Department of Statistics, London School of Economics and Political Science
Erhan Xu · Department of Statistics, London School of Economics and Political Science
Jin Zhu · School of Mathematics, University of Birmingham (machine learning)
Shijin Gong · School of Management, University of Science and Technology of China
Chengchun Shi · London School of Economics and Political Science (large language models, reinforcement learning, statistics)