Rethinking the Sampling Criteria in Reinforcement Learning for LLM Reasoning: A Competence-Difficulty Alignment Perspective

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reinforcement learning (RL) for enhancing large language models' (LLMs) reasoning capabilities suffers from low sample efficiency during rollout, unstable problem-difficulty estimation, and misalignment between estimated difficulty and model competence. To address these issues, we propose the Competence-Difficulty Alignment Sampling (CDAS) framework. First, we derive a robust difficulty estimate by aggregating historical rollout performance discrepancies. Second, we formulate a fixed-point system to quantitatively characterize the model's current reasoning competence. Third, we implement dynamic weighted sampling to achieve real-time alignment between competence and difficulty, integrated with RLHF-style policy optimization. CDAS significantly improves training stability and convergence speed. Empirically, it achieves state-of-the-art average accuracy across multiple mathematical reasoning benchmarks and attains 2.33× higher training efficiency than the prior SOTA method, Dynamic Sampling.
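The three steps in the summary above can be sketched in code. The page does not give the paper's actual equations, so this is a minimal, hypothetical Python sketch: the function names (`aggregate_difficulty`, `fixed_point_competence`, `sample_batch`), the mean-failure-rate aggregation, the exponential proximity weighting, and the fixed-point update rule are all illustrative assumptions, not CDAS's published formulas.

```python
import math
import random

def aggregate_difficulty(pass_rate_history):
    """Difficulty of one problem, aggregated over historical rollouts.
    Assumption: difficulty = mean failure rate across past epochs."""
    return sum(1.0 - p for p in pass_rate_history) / len(pass_rate_history)

def fixed_point_competence(difficulties, pass_rates, tau=0.1, iters=100, tol=1e-6):
    """Illustrative fixed-point system: competence c solves
    c = sum_i w_i(c) * d_i, with w_i(c) proportional to
    p_i * exp(-|d_i - c| / tau), iterated until convergence."""
    c = 0.5  # initial guess
    for _ in range(iters):
        w = [p * math.exp(-abs(d - c) / tau)
             for d, p in zip(difficulties, pass_rates)]
        s = sum(w) or 1.0  # guard against an all-zero weight vector
        c_new = sum(d * wi for d, wi in zip(difficulties, w)) / s
        if abs(c_new - c) < tol:
            return c_new
        c = c_new
    return c

def sample_batch(problems, difficulties, competence, k, tau=0.1, rng=None):
    """Dynamic weighted sampling: problems whose estimated difficulty
    is close to the current competence are sampled more often."""
    rng = rng or random.Random(0)
    weights = [math.exp(-abs(d - competence) / tau) for d in difficulties]
    return rng.choices(problems, weights=weights, k=k)
```

Under these assumptions, each training step would update per-problem difficulties from the latest rollouts, re-solve the fixed point for competence, and draw the next rollout batch with `sample_batch` before running the policy-optimization update.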

📝 Abstract
Reinforcement learning exhibits potential in enhancing the reasoning abilities of large language models, yet it is hard to scale due to low sample efficiency during the rollout phase. Existing methods attempt to improve efficiency by scheduling problems based on their difficulty. However, these approaches suffer from unstable and biased estimations of problem difficulty and fail to capture the alignment between model competence and problem difficulty during RL training, leading to suboptimal results. To tackle these limitations, this paper introduces Competence-Difficulty Alignment Sampling (CDAS), which enables accurate and stable estimation of problem difficulty by aggregating historical performance discrepancies of problems. Model competence is then quantified, and a fixed-point system adaptively selects problems whose difficulty aligns with the model's current competence. Experimental results across a range of challenging mathematical benchmarks show that CDAS achieves substantial improvements in both accuracy and efficiency. CDAS attains the highest average accuracy among baselines and exhibits significant speed advantages over Dynamic Sampling, a competitive strategy in DAPO, which is 2.33 times slower than CDAS.
Problem

Research questions and friction points this paper is trying to address.

Improving sample efficiency in RL for LLM reasoning
Aligning model competence with problem difficulty dynamically
Accurate and stable difficulty estimation via historical performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Accurate difficulty estimation via historical performance discrepancies
Quantifies model competence for adaptive problem selection
Aligns problem difficulty with model competence dynamically
Deyang Kong
Peking University
Natural Language Processing

Qi Guo
Meituan Group, Beijing, China; National Engineering Research Center for Software Engineering, Peking University, Beijing, China

Xiangyu Xi
Peking University; Meituan Group
Natural Language Processing, Event Extraction, Information Extraction, Task-Oriented Dialogue

Wei Wang
Meituan Group, Beijing, China

Jingang Wang
Meituan
Information Retrieval, Natural Language Processing, Machine Translation

Xunliang Cai
Meituan Group, Beijing, China

Shikun Zhang
Peking University

Wei Ye
National Engineering Research Center for Software Engineering, Peking University, Beijing, China