Efficient Reinforcement Finetuning via Adaptive Curriculum Learning

📅 2025-04-07
πŸ“ˆ Citations: 0
✨ Influential: 0
🤖 AI Summary
Reinforcement finetuning (RFT) suffers from low sample and computational efficiency when used to enhance large language models' mathematical reasoning capabilities. Method: This paper proposes AdaRFT, an adaptive curriculum learning framework for RFT. Its core innovation is a real-time, reward-driven dynamic difficulty adjustment mechanism that requires no modification to the reward function or model architecture. It optimizes training via online difficulty estimation, adaptive sampling, and curriculum scheduling, implemented as a lightweight extension to PPO. Results: Evaluated on competition-level math benchmarks including AMC, AIME, and IMO-style problems, AdaRFT reduces training steps by up to 2x while achieving significant accuracy gains. It generalizes across diverse data distributions and model scales, confirming its robustness and scalability.

πŸ“ Abstract
Reinforcement finetuning (RFT) has shown great potential for enhancing the mathematical reasoning capabilities of large language models (LLMs), but it is often sample- and compute-inefficient, requiring extensive training. In this work, we introduce AdaRFT (Adaptive Curriculum Reinforcement Finetuning), a method that significantly improves both the efficiency and final accuracy of RFT through adaptive curriculum learning. AdaRFT dynamically adjusts the difficulty of training problems based on the model's recent reward signals, ensuring that the model consistently trains on tasks that are challenging but solvable. This adaptive sampling strategy accelerates learning by maintaining an optimal difficulty range, avoiding wasted computation on problems that are too easy or too hard. AdaRFT requires only a lightweight extension to standard RFT algorithms like Proximal Policy Optimization (PPO), without modifying the reward function or model architecture. Experiments on competition-level math datasets, including AMC, AIME, and IMO-style problems, demonstrate that AdaRFT significantly improves both training efficiency and reasoning performance. We evaluate AdaRFT across multiple data distributions and model sizes, showing that it reduces the number of training steps by up to 2x and improves accuracy by a considerable margin, offering a more scalable and effective RFT framework.
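The reward-driven difficulty adjustment described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the class name, the target-reward heuristic, and the linear update rule are illustrative assumptions, standing in for whatever difficulty estimator and scheduler AdaRFT actually uses:

```python
# Hypothetical sketch of reward-driven adaptive difficulty sampling.
# Assumes each training problem carries a scalar difficulty score and
# that the batch reward is a success rate in [0, 1].

class AdaptiveCurriculumSampler:
    def __init__(self, problems, target_reward=0.5, step_size=0.1):
        # problems: list of (problem, difficulty) pairs
        self.problems = list(problems)
        self.target_reward = target_reward  # "challenging but solvable" zone
        self.step_size = step_size
        # Start the curriculum at the easiest available difficulty.
        self.difficulty = min(d for _, d in self.problems)

    def sample(self, k=8):
        # Pick the k problems whose difficulty is closest to the current target.
        ranked = sorted(self.problems, key=lambda p: abs(p[1] - self.difficulty))
        return [problem for problem, _ in ranked[:k]]

    def update(self, mean_reward):
        # Raise the target difficulty when the model succeeds more often than
        # the target rate; lower it when the model struggles.
        self.difficulty += self.step_size * (mean_reward - self.target_reward)
```

After each PPO update, the trainer would call `update` with the batch's mean reward and draw the next batch via `sample`, so the curriculum tracks the model's current ability without touching the reward function or model architecture.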
Problem

Research questions and friction points this paper is trying to address.

Improving efficiency of reinforcement finetuning for LLMs
Adaptive curriculum learning for optimal difficulty adjustment
Enhancing mathematical reasoning with reduced training steps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive curriculum learning for efficient training
Dynamic difficulty adjustment based on rewards
Lightweight extension to standard RFT algorithms
Taiwei Shi
University of Southern California
Natural Language Processing · Computational Social Science · Machine Learning

Yiyang Wu
University of Southern California

Linxin Song
University of Southern California

Tianyi Zhou
University of Maryland, College Park

Jieyu Zhao
Assistant Professor at USC
Natural Language Processing · Machine Learning · Fairness in AI