GHPO: Adaptive Guidance for Stable and Efficient LLM Reinforcement Learning

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reinforcement learning with verifiable rewards (RLVR) for large language models (LLMs) suffers from training instability and low sample efficiency on complex reasoning tasks, particularly for smaller LLMs, because task difficulty often exceeds the model's current capability, yielding sparse rewards and stalled learning. Method: The paper proposes Guided Hybrid Policy Optimization (GHPO), an adaptive-guidance framework with a difficulty-aware mechanism: it dynamically adjusts prompts to calibrate task difficulty online and integrates imitation learning with reinforcement learning to construct a smooth curriculum, unifying verifiable-reward RL, prompt-adaptive optimization, hybrid policy gradients, and curriculum learning. Contribution/Results: Evaluated on six mathematical reasoning benchmarks, GHPO achieves an average improvement of 5.0% over strong baselines, outperforming state-of-the-art policy-learning methods and enhancing both training stability and downstream reasoning performance, especially for resource-constrained LLMs.
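The adaptive-guidance step described above can be pictured as follows. This is a minimal, illustrative sketch only: the `generate` and `verify` helpers and the prefix-hint format are hypothetical stand-ins, not the paper's actual implementation.

```python
# Minimal sketch of a GHPO-style adaptive-guidance step (illustrative).
# `generate`, `verify`, and the hint-injection format are hypothetical
# placeholders, not the paper's actual implementation.
import random

def generate(prompt: str, n: int) -> list[str]:
    # Placeholder for policy sampling; returns n candidate answers.
    return [f"answer-{random.randint(0, 3)}" for _ in range(n)]

def verify(candidate: str, gold: str) -> bool:
    # Placeholder verifiable reward: exact-match answer checking.
    return candidate == gold

def ghpo_step(prompt: str, gold_answer: str, gold_solution: str,
              n_rollouts: int = 8, hint_ratio: float = 0.5) -> dict:
    """One difficulty-aware step: on-policy RL when the prompt is
    solvable, guided imitation when the model cannot yet solve it."""
    rollouts = generate(prompt, n_rollouts)
    rewards = [float(verify(r, gold_answer)) for r in rollouts]

    if sum(rewards) == 0.0:
        # Task currently beyond reach: splice a partial reference
        # solution into the prompt so the model imitates the rest.
        k = int(len(gold_solution) * hint_ratio)
        guided = prompt + "\nPartial solution: " + gold_solution[:k]
        return {"mode": "imitation", "prompt": guided}
    # Task within reach: standard RL with verifiable rewards.
    return {"mode": "rl", "prompt": prompt, "rewards": rewards}
```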

📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has recently emerged as a powerful paradigm for facilitating the self-improvement of large language models (LLMs), particularly in the domain of complex reasoning tasks. However, prevailing on-policy RL methods often contend with significant training instability and inefficiency. This is primarily due to a capacity-difficulty mismatch, where the complexity of training data frequently outpaces the model's current capabilities, leading to critically sparse reward signals and stalled learning progress. This challenge is particularly acute for smaller, more resource-efficient LLMs. To overcome this, we introduce the Guided Hybrid Policy Optimization (GHPO), a novel difficulty-aware reinforcement learning framework. GHPO dynamically calibrates task difficulty by employing adaptive prompt refinement to provide targeted guidance. This unique approach adaptively balances direct imitation learning for problems currently beyond the model's reach with exploration-based reinforcement learning for more manageable tasks, effectively creating a smooth and optimized learning curriculum. Extensive experiments demonstrate that GHPO achieves an average performance gain of approximately 5% across six challenging mathematics benchmarks, consistently outperforming strong on-policy reinforcement learning and curriculum learning baselines. Further analysis confirms that our framework significantly enhances both training stability and final reasoning performance, thus offering a scalable and efficient solution for developing powerful and robust reasoning models.
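As background for the RLVR setup the abstract builds on, here is a hedged sketch of a binary verifiable reward for math answers. The `\boxed{}` extraction rule and exact-match normalization are assumptions for illustration, not the paper's exact checker.

```python
# Sketch of a verifiable reward for math answers: extract the final
# \boxed{...} expression and compare it to the reference answer.
# The regex and normalization are illustrative assumptions.
import re

def extract_boxed(text: str) -> str | None:
    # Find all \boxed{...} spans and keep the last one.
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1].strip() if matches else None

def verifiable_reward(completion: str, reference: str) -> float:
    """Binary reward: 1.0 iff the final boxed answer matches."""
    pred = extract_boxed(completion)
    return 1.0 if pred is not None and pred == reference.strip() else 0.0

# Example: verifiable_reward("... so \\boxed{42}", "42") -> 1.0
```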
Problem

Research questions and friction points this paper is trying to address.

Addresses training instability in LLM reinforcement learning
Mitigates the capacity-difficulty mismatch in complex reasoning tasks
Improves training efficiency for smaller, resource-efficient language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive prompt refinement for difficulty calibration
Hybrid policy balancing imitation and exploration
Dynamic curriculum for stable, efficient learning (see the sketch below)
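The dynamic-curriculum idea from the list above can be sketched as a schedule that shrinks guidance as the model's measured pass rate improves. The stage thresholds and hint fractions below are illustrative assumptions, not values from the paper.

```python
# Illustrative dynamic-curriculum schedule: the fraction of the
# reference solution revealed as a hint shrinks as the pass rate
# rises. Thresholds are assumptions, not the paper's settings.
def hint_ratio(pass_rate: float) -> float:
    """Map a prompt's current pass rate to a hint-length fraction."""
    if pass_rate == 0.0:
        return 0.75   # heavy guidance for unsolved problems
    if pass_rate < 0.25:
        return 0.5
    if pass_rate < 0.5:
        return 0.25
    return 0.0        # no guidance once the task is within reach
```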
🔎 Similar Papers
No similar papers found.
👥 Authors
Ziru Liu
Huawei Research
Cheng Gong
City University of Hong Kong
Xinyu Fu
Hong Kong Research Center, Huawei
Large Language Models · MLLM · Agents · Heterogeneous Graphs
Yaofang Liu
City University of Hong Kong
Diffusion Models · Video Generation · Image Processing
Ran Chen
Huawei Noah’s Ark Lab
Shoubo Hu
Huawei Noah’s Ark Lab
Suiyun Zhang
Huawei Research
Rui Liu
Huawei Research
Qingfu Zhang
Chair Professor, FIEEE, City University of Hong Kong
evolutionary computation · multiobjective optimization · computational intelligence
Dandan Tu
Huawei Research