🤖 AI Summary
Reinforcement learning with verifiable rewards (RLVR) suffers from low data efficiency and high rollout overhead in reasoning tasks. To address these challenges, we propose DEPO, an optimization framework built on a collaborative offline-online data filtering paradigm. In the offline phase, high-quality samples are selected by three criteria: diversity, influence, and difficulty. In the online phase, a sample-level explorability metric guides policy updates, while a replay mechanism compensates for sparse exploration signals. DEPO significantly improves training efficiency and convergence stability. Evaluated on five reasoning benchmarks, it outperforms GRPO trained on the full dataset while using only 20% of the training data, and delivers 1.85× and 1.66× training speed-ups on AIME24 and AIME25, respectively. Notably, DEPO is the first method to jointly optimize for high data efficiency and strong generalization performance in RLVR-based reasoning.
📝 Abstract
Recent advances in large reasoning models have leveraged reinforcement learning with verifiable rewards (RLVR) to improve reasoning capabilities. However, scaling these methods typically requires extensive rollout computation and large datasets, leading to high training costs and low data efficiency. To mitigate these issues, we propose DEPO, a Data-Efficient Policy Optimization pipeline that combines optimized strategies for both offline and online data selection. In the offline phase, we curate a high-quality subset of training samples based on diversity, influence, and appropriate difficulty. During online RLVR training, we introduce a sample-level explorability metric to dynamically filter out samples with low exploration potential, thereby substantially reducing rollout computation costs. Furthermore, we incorporate a replay mechanism for under-explored samples to ensure adequate training, which enhances the model's final convergence performance. Experiments across five reasoning benchmarks show that DEPO consistently outperforms existing methods in both offline and online data selection scenarios. Notably, using only 20% of the training data, our approach achieves a 1.85× speed-up on AIME24 and a 1.66× speed-up on AIME25 compared to GRPO trained on the full dataset.
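To make the online stage concrete, here is a minimal sketch of what sample-level explorability filtering with a replay buffer might look like. It assumes binary verifiable rewards (one per rollout in a GRPO-style group) and uses a success-rate-variance proxy for the explorability metric; the function names, the `threshold` value, and the proxy itself are illustrative assumptions, not the paper's exact formulation:

```python
from collections import deque

def explorability(rewards):
    """Proxy score (our assumption): groups whose rollouts are all correct or
    all wrong carry zero GRPO advantage, hence no exploration signal."""
    p = sum(rewards) / len(rewards)  # empirical success rate within the group
    return 4.0 * p * (1.0 - p)       # in [0, 1]; peaks at p = 0.5, zero at p in {0, 1}

# Under-explored samples are queued here and revisited in later updates
# rather than discarded outright.
replay = deque(maxlen=4096)

def filter_batch(batch, reward_groups, threshold=0.1):
    """Keep samples with sufficient exploration potential for the policy
    update; route the rest into the replay buffer."""
    kept = []
    for sample, rewards in zip(batch, reward_groups):
        if explorability(rewards) >= threshold:
            kept.append(sample)
        else:
            replay.append(sample)
    return kept
```

In this sketch, filtering skips the rollout-heavy policy update for uninformative samples, while the bounded replay buffer ensures those samples are still trained on once the policy has changed enough for them to yield signal.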