SPEED-RL: Faster Training of Reasoning Models via Online Curriculum Learning

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the computational inefficiency caused by uniform prompt sampling in reinforcement learning (RL) training of large language models (LLMs), this paper proposes an adaptive prompt selection framework based on online curriculum learning. Theoretically, the authors prove that focusing on moderately difficult samples significantly improves the signal-to-noise ratio (SNR) of gradient estimates. They further design a parameter-free, plug-and-play mechanism for online difficulty estimation and dynamic sampling. The framework is compatible with mainstream RL algorithms, requires no human intervention, and achieves a 2–6× training speedup with no loss in final accuracy. The core contributions are threefold: (1) establishing a theoretical link between sample difficulty and gradient SNR; (2) introducing a provably convergent online curriculum learning paradigm for prompt selection; and (3) delivering an efficient, general-purpose, out-of-the-box method for accelerating RL training of LLMs.
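The online difficulty estimation and dynamic sampling the summary describes can be sketched as below. The class name, thresholds, and EMA update are illustrative assumptions, not the paper's actual implementation; the core idea is simply to track each prompt's rollout pass rate and keep only prompts in an intermediate band.

```python
from collections import defaultdict

class DifficultyTracker:
    """Online per-prompt difficulty estimate from rollout pass rates.

    All names and thresholds here are illustrative assumptions; the
    paper's estimator may differ in detail.
    """

    def __init__(self, low=0.2, high=0.8, decay=0.9):
        self.low, self.high = low, high  # keep prompts with pass rate in (low, high)
        self.decay = decay               # EMA decay for the running estimate
        self.pass_rate = defaultdict(lambda: 0.5)  # unseen prompt = assumed intermediate

    def update(self, prompt_id, rewards):
        """Fold a batch of binary rollout rewards into the running pass rate."""
        batch_rate = sum(rewards) / len(rewards)
        self.pass_rate[prompt_id] = (
            self.decay * self.pass_rate[prompt_id] + (1 - self.decay) * batch_rate
        )

    def is_informative(self, prompt_id):
        """Intermediate-difficulty prompts carry the most gradient signal."""
        return self.low < self.pass_rate[prompt_id] < self.high

tracker = DifficultyTracker()
tracker.update("p1", [1, 1, 1, 1])   # solved every time: drifts toward "too easy"
tracker.update("p2", [0, 1, 1, 0])   # mixed outcomes: stays intermediate
print(tracker.is_informative("p2"))  # True
```

Because the estimate is updated from rollouts the RL algorithm generates anyway, the bookkeeping adds essentially no extra compute, which is what makes a plug-and-play, tuning-light design plausible.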

📝 Abstract
Training large language models with reinforcement learning (RL) against verifiable rewards significantly enhances their reasoning abilities, yet remains computationally expensive due to inefficient uniform prompt sampling. We introduce Selective Prompting with Efficient Estimation of Difficulty (SPEED), an adaptive online RL curriculum that selectively chooses training examples of intermediate difficulty to maximize learning efficiency. Theoretically, we establish that intermediate-difficulty prompts improve the gradient estimator's signal-to-noise ratio, accelerating convergence. Empirically, our efficient implementation leads to 2x to 6x faster training without degrading accuracy, requires no manual tuning, and integrates seamlessly into standard RL algorithms.
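One standard way to see why intermediate difficulty improves the gradient estimator's signal-to-noise ratio (an illustrative argument, not the paper's full theorem): with a binary verifiable reward and per-prompt success probability $p$, the reward is a Bernoulli variable, so

```latex
\operatorname{Var}[R] = p(1-p), \qquad \arg\max_{p \in [0,1]} p(1-p) = \tfrac{1}{2},
```

which vanishes as $p \to 0$ or $p \to 1$ and peaks at $p = \tfrac{1}{2}$. Under a mean-reward baseline, prompts the model always or never solves yield near-zero advantages, so their rollouts consume full generation compute while contributing almost no gradient signal.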
Problem

Research questions and friction points this paper is trying to address.

Optimize training efficiency for large language models
Reduce computational cost in reinforcement learning
Improve gradient estimator signal-to-noise ratio
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective prompting with adaptive difficulty curriculum
Intermediate-difficulty prompts optimize signal-to-noise ratio
Efficient implementation enables faster training without accuracy loss
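The "integrates seamlessly into standard RL algorithms" claim amounts to placing a difficulty filter in front of the prompt sampler, leaving the update itself untouched. A minimal integration sketch, where `estimate_pass_rate` and `rl_update` are hypothetical stand-ins for an existing pipeline, not the paper's API:

```python
def select_batch(prompts, estimate_pass_rate, batch_size, low=0.2, high=0.8):
    """Keep only prompts whose estimated pass rate is intermediate."""
    kept = [p for p in prompts if low < estimate_pass_rate(p) < high]
    return kept[:batch_size]

def train_step(prompts, estimate_pass_rate, rl_update, batch_size=8):
    """One training step: filter, then hand the batch to the unchanged RL update."""
    batch = select_batch(prompts, estimate_pass_rate, batch_size)
    if batch:              # skip the step if nothing informative remains
        rl_update(batch)   # any PPO/GRPO-style update works here unchanged
    return batch

# Toy usage: a fixed pass-rate table stands in for the online estimator.
rates = {"easy": 0.95, "hard": 0.05, "mid": 0.5}
batch = train_step(list(rates), rates.get, rl_update=lambda b: None)
print(batch)  # ['mid']
```

Because filtering happens before generation, prompts that are too easy or too hard never incur rollout cost, which is where the reported training speedup would come from.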