Accelerating RLHF Training with Reward Variance Increase

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reinforcement Learning from Human Feedback (RLHF) suffers from low training efficiency, particularly in the GRPO framework, where insufficient reward variance under the initial policy impedes convergence. Method: We propose a controllable reward-variance amplification method: (i) we construct a reward adjustment model that provably preserves relative preferences and the expected reward while strictly increasing reward variance; (ii) we design an O(n log n) algorithm that finds a global solution of the underlying nonconvex optimization problem (NP-hard in general) by explicitly characterizing the extreme points of the feasible set; and (iii) we integrate this adjustment model into GRPO, yielding the GRPOVI algorithm. Contributions/Results: We theoretically prove that our method leaves the expected reward and relative preferences invariant while increasing reward variance, which accelerates convergence. Empirical evaluation demonstrates improved convergence speed across multiple LLM alignment tasks. Moreover, our framework provides an indirect formal explanation for the effectiveness of rule-based rewards in DeepSeek-R1.
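As a hedged illustration of the constraints the adjustment model must satisfy (this is *not* the paper's actual model, only a minimal sketch): an affine rescaling of rewards about their mean preserves both the mean and the relative ordering while strictly increasing variance.

```python
def amplify_variance(rewards, alpha=2.0):
    """Hypothetical illustration, not the paper's adjustment model.

    Affine rescaling r' = mu + alpha * (r - mu) with alpha > 1:
    - preserves the mean (sum of deviations is unchanged),
    - preserves the ranking of rewards (the map is strictly increasing),
    - multiplies the variance by alpha ** 2.
    """
    assert alpha > 1.0
    mu = sum(rewards) / len(rewards)
    return [mu + alpha * (r - mu) for r in rewards]


rewards = [0.1, 0.4, 0.5, 0.8]
adjusted = amplify_variance(rewards)
# Mean stays at 0.45, order is preserved, variance grows by alpha**2.
```

The paper's model is more general than this single-parameter rescaling (hence the nonconvex optimization), but any valid adjustment must satisfy the same three invariants sketched here.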

📝 Abstract
Reinforcement learning from human feedback (RLHF) is an essential technique for ensuring that large language models (LLMs) are aligned with human values and preferences during the post-training phase. As an effective RLHF approach, group relative policy optimization (GRPO) has demonstrated success in many LLM-based applications. However, efficient GRPO-based RLHF training remains a challenge. Recent studies reveal that a higher reward variance of the initial policy model leads to faster RLHF training. Inspired by this finding, we propose a practical reward adjustment model to accelerate RLHF training by provably increasing the reward variance and preserving the relative preferences and reward expectation. Our reward adjustment method inherently poses a nonconvex optimization problem, which is NP-hard to solve in general. To overcome the computational challenges, we design a novel $O(n \log n)$ algorithm to find a global solution of the nonconvex reward adjustment model by explicitly characterizing the extreme points of the feasible set. As an important application, we naturally integrate this reward adjustment model into the GRPO algorithm, leading to a more efficient GRPO with reward variance increase (GRPOVI) algorithm for RLHF training. As an interesting byproduct, we provide an indirect explanation for the empirical effectiveness of GRPO with rule-based reward for RLHF training, as demonstrated in DeepSeek-R1. Experiment results demonstrate that the GRPOVI algorithm can significantly improve the RLHF training efficiency compared to the original GRPO algorithm.
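For context, GRPO's group-relative advantage (a standard formulation reconstructed from public descriptions of the algorithm, not taken from this paper) normalizes each sampled response's reward by group statistics. When reward variance within a group is near zero, every normalized advantage is close to zero and the policy gradient carries almost no learning signal, which is the inefficiency the paper targets:

```python
import statistics


def group_relative_advantages(rewards, eps=1e-8):
    """Standard GRPO-style advantage: A_i = (r_i - mean) / (std + eps).

    If all rewards in a group are nearly identical (low variance),
    every advantage is close to zero, so the group contributes almost
    nothing to the policy update.
    """
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)  # population std over the group
    return [(r - mu) / (sigma + eps) for r in rewards]


# A degenerate group: identical rewards produce vanishing advantages.
flat = group_relative_advantages([1.0, 1.0, 1.0, 1.0])
# A spread group: rewards far from the mean yield large-magnitude advantages.
spread = group_relative_advantages([0.0, 1.0])
```

Increasing within-group reward variance (while keeping the mean and ranking fixed, as the paper's adjustment model does) directly enlarges these advantage magnitudes and hence the gradient signal per update.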
Problem

Research questions and friction points this paper is trying to address.

Accelerating RLHF training via reward variance optimization
Solving nonconvex optimization in reward adjustment efficiently
Enhancing GRPO algorithm with variance increase for efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes reward adjustment model to increase variance
Designs O(n log n) algorithm for nonconvex optimization
Integrates adjustment into GRPO for faster RLHF