🤖 AI Summary
Problem: Reinforcement learning with verifiable rewards (RLVR) for large language models (LLMs) is prone to training instability and low sample efficiency on complex reasoning tasks, especially for smaller LLMs: task difficulty often exceeds the model’s current capability, so rewards are sparse and learning stagnates.
Method: We propose Guided Hybrid Policy Optimization (GHPO), an adaptive-guidance framework built around a difficulty-aware mechanism: it calibrates task difficulty online by dynamically refining prompts, and it blends imitation learning with reinforcement learning to form a smooth curriculum (a minimal sketch follows the summary). The approach unifies verifiable-reward RL, prompt-adaptive optimization, hybrid policy gradients, and curriculum learning.
Contribution/Results: Evaluated on six mathematical reasoning benchmarks, our method achieves an average improvement of 5.0%, consistently outperforming strong on-policy reinforcement learning and curriculum learning baselines. It improves both training stability and final reasoning performance, especially for resource-constrained LLMs.
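As a concrete reading of the summary, here is a minimal sketch of the hybrid switching logic in Python. Everything in it is an assumption for illustration: the `Problem` fields, the `sample`/`verify`/`rl_loss`/`sft_loss` callables, the all-rollouts-failed trigger, and the hint construction are ours, not the paper’s exact formulation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Problem:
    prompt: str            # the original question
    solution: str          # ground-truth reference solution
    partial_solution: str  # leading portion of the solution, used as a hint

def ghpo_style_update(sample: Callable[[str], str],
                      verify: Callable[[str], bool],
                      rl_loss: Callable[[str, str, float], float],
                      sft_loss: Callable[[str, str], float],
                      problem: Problem,
                      n_rollouts: int = 8) -> float:
    """One difficulty-aware step: explore with RL when the task is within
    reach, imitate a hint-refined target when every rollout fails.
    Hypothetical interface; a sketch, not the paper's implementation."""
    rollouts = [sample(problem.prompt) for _ in range(n_rollouts)]
    rewards = [1.0 if verify(r) else 0.0 for r in rollouts]
    if sum(rewards) == 0.0:
        # Verifiable reward is uniformly zero: the task currently exceeds
        # the model's capability, so refine the prompt with partial
        # guidance and fall back to imitation (supervised) learning.
        guided = problem.prompt + "\n\nHint: " + problem.partial_solution
        return sft_loss(guided, problem.solution)
    # Otherwise use an ordinary verifiable-reward policy-gradient loss,
    # averaged over the sampled rollouts.
    return sum(rl_loss(problem.prompt, r, w)
               for r, w in zip(rollouts, rewards)) / n_rollouts
```

One way to read the sparse-reward motivation: in group-based on-policy methods such as GRPO, a group whose rewards are all zero yields zero advantages and hence no gradient; routing exactly those groups to guided imitation keeps the learning signal dense.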
📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has recently emerged as a powerful paradigm for facilitating the self-improvement of large language models (LLMs), particularly in the domain of complex reasoning tasks. However, prevailing on-policy RL methods often contend with significant training instability and inefficiency. This is primarily due to a capacity-difficulty mismatch, where the complexity of training data frequently outpaces the model's current capabilities, leading to critically sparse reward signals and stalled learning progress. This challenge is particularly acute for smaller, more resource-efficient LLMs. To overcome this, we introduce the Guided Hybrid Policy Optimization (GHPO), a novel difficulty-aware reinforcement learning framework. GHPO dynamically calibrates task difficulty by employing adaptive prompt refinement to provide targeted guidance. This unique approach adaptively balances direct imitation learning for problems currently beyond the model's reach with exploration-based reinforcement learning for more manageable tasks, effectively creating a smooth and optimized learning curriculum. Extensive experiments demonstrate that GHPO achieves an average performance gain of approximately 5% across six challenging mathematics benchmarks, consistently outperforming strong on-policy reinforcement learning and curriculum learning baselines. Further analysis confirms that our framework significantly enhances both training stability and final reasoning performance, thus offering a scalable and efficient solution for developing powerful and robust reasoning models.
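The abstract leaves the refinement schedule unspecified; below is a minimal sketch of one plausible staging, assuming the hint is a growing prefix of a step-by-step reference solution. The escalation rule, function name, and parameters are illustrative, not the paper’s.

```python
def refine_prompt(prompt: str, solution_steps: list[str],
                  failures: int, max_stages: int = 3) -> str:
    """Escalate guidance as failures accumulate: reveal a longer prefix
    of the reference solution at each stage. Illustrative schedule only."""
    stage = min(failures, max_stages)
    if stage == 0:
        return prompt  # task is currently manageable; no guidance needed
    keep = max(1, len(solution_steps) * stage // max_stages)
    hint = "\n".join(solution_steps[:keep])
    return f"{prompt}\n\nPartial solution:\n{hint}"

# Example: after two consecutive failed passes, two of the three steps
# are revealed, shrinking the effective difficulty of the task.
steps = ["Let x be the unknown.", "Then 2x + 3 = 11, so 2x = 8.", "x = 4."]
print(refine_prompt("Solve for x: 2x + 3 = 11.", steps, failures=2))
```

Staging the hint as a prefix preserves a verifiable final answer, so the same reward check can score both guided and unguided rollouts.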