Towards High Data Efficiency in Reinforcement Learning with Verifiable Reward

📅 2025-09-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reinforcement learning with verifiable rewards (RLVR) suffers from low data efficiency and high rollout overhead in reasoning tasks. To address these challenges, we propose DEPO, an optimization framework that establishes an offline-online collaborative data filtering paradigm. In the offline phase, high-quality samples are selected based on three criteria: diversity, influence, and difficulty. In the online phase, a sample-level explorability metric guides policy updates, while a replay mechanism compensates for sparse exploration signals. DEPO significantly improves training efficiency and convergence stability. Evaluated on five reasoning benchmarks, it achieves superior performance using only 20% of the training data compared to full-data GRPO. On AIME24 and AIME25, it delivers 1.85× and 1.66× training speedups, respectively. Notably, DEPO is the first method to jointly optimize for high data efficiency and strong generalization performance in RLVR-based reasoning.

📝 Abstract
Recent advances in large reasoning models have leveraged reinforcement learning with verifiable rewards (RLVR) to improve reasoning capabilities. However, scaling these methods typically requires extensive rollout computation and large datasets, leading to high training costs and low data efficiency. To mitigate this issue, we propose DEPO, a Data-Efficient Policy Optimization pipeline that combines optimized strategies for both offline and online data selection. In the offline phase, we curate a high-quality subset of training samples based on diversity, influence, and appropriate difficulty. During online RLVR training, we introduce a sample-level explorability metric to dynamically filter samples with low exploration potential, thereby reducing substantial rollout computational costs. Furthermore, we incorporate a replay mechanism for under-explored samples to ensure adequate training, which enhances the model's final convergence performance. Experiments across five reasoning benchmarks show that DEPO consistently outperforms existing methods in both offline and online data selection scenarios. Notably, using only 20% of the training data, our approach achieves a 1.85 times speed-up on AIME24 and a 1.66 times speed-up on AIME25 compared to GRPO trained on the full dataset.
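The offline phase described above can be illustrated as a top-k ranking over the three stated criteria. This is a minimal sketch, not the paper's algorithm: the criteria names (diversity, influence, difficulty) come from the abstract, but the scoring functions, the equal weighting, and the `pass_rate`-based difficulty proxy are all made up for illustration.

```python
def offline_select(samples, k):
    """Rank candidate training samples by a combined quality score
    and keep the top-k subset (hypothetical scoring, see lead-in)."""
    def score(s):
        # Difficulty proxy (assumed): prefer mid-range pass rates,
        # i.e. problems that are neither trivial nor hopeless.
        difficulty = 1.0 - abs(s["pass_rate"] - 0.5) * 2.0
        # Equal weighting of the three criteria is an assumption.
        return s["diversity"] + s["influence"] + difficulty
    return sorted(samples, key=score, reverse=True)[:k]

pool = [
    {"id": "a", "diversity": 0.9, "influence": 0.8, "pass_rate": 0.5},
    {"id": "b", "diversity": 0.2, "influence": 0.1, "pass_rate": 1.0},
    {"id": "c", "diversity": 0.7, "influence": 0.6, "pass_rate": 0.4},
]
subset = offline_select(pool, k=2)
print([s["id"] for s in subset])  # ['a', 'c']
```

Sample "b" is dropped both for low diversity/influence and because a pass rate of 1.0 marks it as too easy to contribute training signal.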
Problem

Research questions and friction points this paper is trying to address.

Reducing high training costs in reinforcement learning with verifiable rewards
Improving data efficiency by selecting high-quality training samples
Minimizing rollout computation through dynamic sample filtering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Offline data selection based on diversity, influence, and difficulty
Online sample filtering using explorability metric
Replay mechanism for under-explored samples
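The online filtering and replay ideas above can be sketched as follows. This is an illustrative approximation, not DEPO's actual metric: it assumes explorability is measured from the group of rollout rewards per sample, using the observation that in GRPO-style training a prompt whose rollouts are all correct or all wrong yields zero group advantage and hence no gradient signal. The `4p(1-p)` form and the threshold value are placeholders.

```python
from collections import deque

def explorability(rewards):
    # Hypothetical proxy: peaks when rollouts are split 50/50 between
    # correct (1) and incorrect (0), and is zero when all rollouts agree.
    p = sum(rewards) / len(rewards)
    return 4.0 * p * (1.0 - p)

def filter_batch(batch_rewards, threshold=0.1, replay_buffer=None):
    """Keep samples with enough exploration signal for the policy update;
    stash low-signal samples for later replay instead of discarding them."""
    kept, deferred = [], []
    for idx, rewards in enumerate(batch_rewards):
        if explorability(rewards) >= threshold:
            kept.append(idx)
        else:
            deferred.append(idx)
    if replay_buffer is not None:
        replay_buffer.extend(deferred)
    return kept

replay = deque(maxlen=256)  # bounded buffer for under-explored samples
rewards = [
    [1, 1, 1, 1],  # all rollouts correct: no signal, deferred
    [0, 1, 0, 1],  # mixed outcomes: high explorability, kept
    [0, 0, 0, 0],  # all rollouts wrong: no signal, deferred
]
kept = filter_batch(rewards, replay_buffer=replay)
print(kept)          # [1]
print(list(replay))  # [0, 2]
```

Deferring rather than dropping the zero-signal samples mirrors the stated motivation for the replay mechanism: under-explored samples still get trained on later, which the abstract credits for better final convergence.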
Xinyu Tang
Gaoling School of Artificial Intelligence, Renmin University of China
Zhenduo Zhang
Ant Group
Yurou Liu
Renmin University of China
AI4Science
Wayne Xin Zhao
Professor, Renmin University of China
Recommender System, Natural Language Processing, Large Language Model
Zujie Wen
Ant Group
Zhiqiang Zhang
Ant Group
Jun Zhou
Ant Group