Efficient Reinforcement Learning for Large Language Models with Intrinsic Exploration

📅 2025-11-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low data efficiency and high sampling overhead of verifiable-reward reinforcement learning (RLVR) for large language models (LLMs), this paper proposes an efficient training framework that combines adaptive curriculum learning with exploration enhancement. Methodologically, it (1) dynamically constructs a difficulty-graded training curriculum based on prompt perplexity, and (2) introduces a relative-entropy-difference amplification mechanism that prioritizes highly exploratory rollouts through weighted sampling. By modeling policy distribution shifts and incorporating rollout-difference-weighted sampling, the approach substantially improves sample utilization. Empirical evaluation on mathematical reasoning tasks with Qwen and Llama series models shows that the method matches or exceeds baseline performance using only one third of the rollouts required by the baselines, substantially reducing computational cost while preserving strong reasoning capability.
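
To make the first mechanism concrete, here is a minimal sketch of a perplexity-ordered curriculum, assuming a Hugging Face causal LM for scoring. This is an illustration, not the paper's implementation: the model name is a placeholder, the prompt list is fake, and ordering prompts once by ascending perplexity is a simplification of the paper's dynamic, difficulty-graded curriculum.

```python
# Minimal sketch of a perplexity-ordered curriculum (not the paper's code).
# Assumes Hugging Face transformers; the model name is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B")
model.eval()

@torch.no_grad()
def prompt_perplexity(prompt: str) -> float:
    """Perplexity of a prompt under the current policy model."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    # Teacher-forced loss with labels=ids gives the mean negative
    # log-likelihood of the prompt tokens; exponentiate for perplexity.
    loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

prompts = ["..."]  # training prompts (placeholder)
# Easy-to-hard curriculum: low-perplexity (well-understood) prompts first,
# progressing toward higher-perplexity, more challenging ones.
curriculum = sorted(prompts, key=prompt_perplexity)
```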

📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has improved the reasoning ability of large language models, yet training remains costly because many rollouts contribute little to optimization relative to the computation they require. This study investigates how leveraging intrinsic data properties, which come at almost no extra cost during training, can improve data efficiency for RLVR. We propose PREPO with two complementary components. First, we adopt prompt perplexity as an indicator of model adaptability in learning, enabling the model to progress from well-understood contexts to more challenging ones. Second, we amplify the discrepancy among rollouts by differentiating their relative entropy, and prioritize sequences that exhibit a higher degree of exploration. Together, these mechanisms reduce rollout demand while preserving competitive performance. On Qwen and Llama models, PREPO achieves strong results on mathematical reasoning benchmarks with up to 3 times fewer rollouts than the baselines. Beyond the empirical gains, we provide theoretical and in-depth analyses explaining why our method improves the data efficiency of RLVR.
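
The second component, exploration-weighted rollout selection, can also be sketched briefly. The sketch below is an assumed simplification, not PREPO's exact formulation: each rollout's entropy is approximated by its mean negative token log-probability, deviations from the batch-mean entropy are amplified by an illustrative coefficient `beta`, and a softmax turns the amplified differences into sampling weights. All names and values are hypothetical.

```python
# Minimal sketch of entropy-difference-amplified rollout weighting
# (illustrative simplification, not PREPO's exact formulation).
import numpy as np

def rollout_entropy(token_logprobs: np.ndarray) -> float:
    """Approximate a rollout's entropy as its mean negative log-probability."""
    return float(-token_logprobs.mean())

def exploration_weights(rollout_logprobs: list[np.ndarray],
                        beta: float = 2.0) -> np.ndarray:
    """Softmax weights over rollouts, amplifying each rollout's deviation
    from the batch-mean entropy so that highly exploratory rollouts are
    sampled more often. `beta` is an assumed amplification coefficient."""
    entropies = np.array([rollout_entropy(lp) for lp in rollout_logprobs])
    centered = beta * (entropies - entropies.mean())  # amplify relative differences
    weights = np.exp(centered - centered.max())       # numerically stable softmax
    return weights / weights.sum()

# Usage: pick a subset of rollouts for the policy update.
rng = np.random.default_rng(0)
logprobs = [rng.normal(-1.5, 0.3, size=64) for _ in range(8)]  # fake rollouts
w = exploration_weights(logprobs)
picked = rng.choice(len(logprobs), size=4, replace=False, p=w)
```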
Problem

Research questions and friction points this paper is trying to address.

Improving data efficiency in reinforcement learning for language models
Reducing rollout demand while maintaining competitive performance
Leveraging intrinsic data properties to optimize training computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses prompt perplexity to guide learning progression
Amplifies rollout discrepancy via relative entropy
Reduces rollout demand while maintaining performance
Yan Sun
National University of Singapore
Jia Guo
Ant Group
Stanley Kok
National University of Singapore
Artificial Intelligence · Machine Learning · Information Systems
Zihao Wang
Ant Group
Zujie Wen
Ant Group
Zhiqiang Zhang
Ant Group