🤖 AI Summary
To address the low data efficiency and high sampling overhead of large language models (LLMs) in verifiable-reward reinforcement learning, this paper proposes an efficient training framework that integrates adaptive curriculum learning with enhanced exploration. Methodologically, it (1) dynamically constructs a difficulty-graded training curriculum based on prompt perplexity, and (2) introduces a relative entropy difference amplification mechanism that prioritizes highly exploratory rollouts via weighted sampling. By modeling policy distribution shifts and incorporating rollout-difference-weighted sampling, the approach significantly improves sample utilization. Empirical evaluation on mathematical reasoning tasks with Qwen and Llama series models shows that the method matches or exceeds baseline performance while using only one-third of the rollouts required by baseline methods, substantially reducing computational cost while preserving strong reasoning capability.
📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has improved the reasoning ability of large language models, yet training remains costly because many rollouts contribute little to optimization relative to the computation they require. This study investigates how leveraging intrinsic data properties, an almost free signal available during training, can improve data efficiency for RLVR. We propose PREPO with two complementary components. First, we adopt prompt perplexity as an indicator of the model's adaptability in learning, enabling the model to progress from well-understood contexts to more challenging ones. Second, we amplify the discrepancy among rollouts by differentiating their relative entropy, and we prioritize sequences that exhibit a higher degree of exploration. Together, these mechanisms reduce rollout demand while preserving competitive performance. On Qwen and Llama models, PREPO achieves effective results on mathematical reasoning benchmarks with up to 3 times fewer rollouts than the baselines. Beyond these empirical gains, we provide theoretical and in-depth analyses explaining why our method improves the data efficiency of RLVR.
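The two components above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the helper names (`prompt_perplexity`, `curriculum_order`, `rollout_weights`, `sample_rollouts`), the softmax-style amplification with a sharpness parameter `beta`, and the toy data are all assumptions made for clarity; the paper's exact scoring and weighting scheme may differ.

```python
import math
import random

def prompt_perplexity(token_logprobs):
    """Perplexity = exp(-mean token log-prob); lower values suggest
    a context the model already handles well."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def curriculum_order(prompts):
    """Sort prompts from low to high perplexity, so training can
    progress from well-understood contexts to harder ones."""
    return sorted(prompts, key=lambda p: prompt_perplexity(p["logprobs"]))

def rollout_weights(entropies, beta=2.0):
    """Amplify relative entropy differences: a softmax over
    mean-centered rollout entropies. beta (assumed hyperparameter)
    sharpens the contrast so more exploratory rollouts dominate."""
    mean_h = sum(entropies) / len(entropies)
    exps = [math.exp(beta * (h - mean_h)) for h in entropies]
    z = sum(exps)
    return [e / z for e in exps]

def sample_rollouts(rollouts, entropies, k, seed=0):
    """Weighted sampling (with replacement) of k rollouts,
    favoring higher-entropy, i.e. more exploratory, sequences."""
    rng = random.Random(seed)
    return rng.choices(rollouts, weights=rollout_weights(entropies), k=k)

if __name__ == "__main__":
    # Toy curriculum: the low-perplexity prompt comes first.
    prompts = [
        {"id": "hard", "logprobs": [-2.0, -3.0]},
        {"id": "easy", "logprobs": [-0.1, -0.2]},
    ]
    print([p["id"] for p in curriculum_order(prompts)])

    # Toy rollout selection: the high-entropy rollout is picked most often.
    picked = sample_rollouts(["a", "b", "c"], [0.1, 0.1, 5.0], k=10)
    print(picked.count("c"))
```

Mean-centering before the exponential makes the weights depend only on entropy *differences* among the rollouts in a batch, which is one simple way to realize the "relative entropy difference amplification" idea described above.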