🤖 AI Summary
Reinforcement fine-tuning (RFT) suffers from low sample and computational efficiency when enhancing large language models' mathematical reasoning capabilities.
Method: This paper proposes AdaRFT, an adaptive curriculum learning framework for RFT. Its core is a real-time, reward-driven difficulty adjustment mechanism that requires no modification to the reward function or model architecture. Training proceeds via online difficulty estimation, adaptive sampling, and curriculum scheduling, implemented as a lightweight extension to PPO.
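For intuition, here is a minimal sketch of the kind of scheduler described above, assuming each problem carries a precomputed scalar difficulty score and that a target difficulty moves in proportion to the gap between the recent mean reward and a target success rate. The names `TARGET_REWARD`, `STEP_SIZE`, `update_target_difficulty`, and `sample_batch`, along with the constants, are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch of a reward-driven difficulty scheduler (assumed update rule:
# target <- target + step_size * (mean_reward - target_reward)).

TARGET_REWARD = 0.5  # assumed target success rate: "challenging but solvable"
STEP_SIZE = 50.0     # assumed step size for moving the difficulty target

def update_target_difficulty(target: float, batch_rewards: list[float]) -> float:
    """Raise the difficulty target when the model succeeds more often than
    TARGET_REWARD, and lower it when it succeeds less often."""
    mean_reward = sum(batch_rewards) / len(batch_rewards)
    return target + STEP_SIZE * (mean_reward - TARGET_REWARD)

def sample_batch(problems: list[dict], target: float, batch_size: int) -> list[dict]:
    """Pick the problems whose precomputed difficulty score is closest to the
    current target; assumes each problem dict has a 'difficulty' key."""
    ranked = sorted(problems, key=lambda p: abs(p["difficulty"] - target))
    return ranked[:batch_size]
```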
Results: On benchmarks spanning AMC, AIME, and IMO-style problems, AdaRFT reduces the number of training steps by up to 2x while delivering notable accuracy gains. It holds up across diverse data distributions and model scales, indicating robustness and scalability.
📄 Abstract
Reinforcement finetuning (RFT) has shown great potential for enhancing the mathematical reasoning capabilities of large language models (LLMs), but it is often sample- and compute-inefficient, requiring extensive training. In this work, we introduce AdaRFT (Adaptive Curriculum Reinforcement Finetuning), a method that significantly improves both the efficiency and final accuracy of RFT through adaptive curriculum learning. AdaRFT dynamically adjusts the difficulty of training problems based on the model's recent reward signals, ensuring that the model consistently trains on tasks that are challenging but solvable. This adaptive sampling strategy accelerates learning by maintaining an optimal difficulty range, avoiding wasted computation on problems that are too easy or too hard. AdaRFT requires only a lightweight extension to standard RFT algorithms like Proximal Policy Optimization (PPO), without modifying the reward function or model architecture. Experiments on competition-level math datasets, including AMC, AIME, and IMO-style problems, demonstrate that AdaRFT significantly improves both training efficiency and reasoning performance. We evaluate AdaRFT across multiple data distributions and model sizes, showing that it reduces the number of training steps by up to 2x and improves accuracy by a considerable margin, offering a more scalable and effective RFT framework.
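As a usage illustration, the scheduler sketched above plugs into a standard PPO-style loop without touching the reward function or the model. Everything here is a hypothetical stand-in rather than the paper's code or any library's API; `rollout_rewards` and `ppo_update` are stubbed placeholders for a real RFT pipeline so the example runs end to end.

```python
# Toy driver wiring the scheduler into a PPO-style RFT loop.
# All names and constants are illustrative assumptions.
import random

# Stub problem pool with a synthetic difficulty score per problem.
problems = [{"id": i, "difficulty": float(i)} for i in range(1000)]

def rollout_rewards(batch: list[dict]) -> list[float]:
    # Stub: pretend easier problems are solved more often.
    return [1.0 if random.random() < max(0.0, 1.0 - p["difficulty"] / 1000) else 0.0
            for p in batch]

def ppo_update(batch: list[dict], rewards: list[float]) -> None:
    pass  # stand-in for the unchanged PPO optimization step

target = 100.0  # assumed initial difficulty target
for step in range(50):
    batch = sample_batch(problems, target, batch_size=32)
    rewards = rollout_rewards(batch)       # e.g., 1.0 per correct final answer
    ppo_update(batch, rewards)             # the PPO step itself is unchanged
    target = update_target_difficulty(target, rewards)
```

Note the design point this illustrates: the only moving part is which problems get sampled each step, which is why the method composes with existing RFT pipelines.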