🤖 AI Summary
To address the computational inefficiency caused by uniform prompt sampling in reinforcement learning (RL) training of large language models (LLMs), this paper proposes an adaptive prompt selection framework based on online curriculum learning. Theoretically, the authors prove that focusing on moderately difficult samples significantly improves the signal-to-noise ratio (SNR) of gradient estimates. They further design a parameter-free, plug-and-play mechanism for online difficulty estimation and dynamic sampling. The framework is compatible with mainstream RL algorithms, requires no human intervention, and achieves a 2–6× training speedup with no loss in inference accuracy. The core contributions are threefold: (1) establishing the first theoretical link between sample difficulty and gradient SNR; (2) introducing the first provably convergent online curriculum learning paradigm for prompt selection; and (3) delivering an efficient, general-purpose, out-of-the-box solution for accelerating RL training of LLMs.
📝 Abstract
Training large language models with reinforcement learning (RL) against verifiable rewards significantly enhances their reasoning abilities, yet remains computationally expensive due to inefficient uniform prompt sampling. We introduce Selective Prompting with Efficient Estimation of Difficulty (SPEED), an adaptive online RL curriculum that selectively chooses training examples of intermediate difficulty to maximize learning efficiency. Theoretically, we establish that intermediate-difficulty prompts improve the gradient estimator's signal-to-noise ratio, accelerating convergence. Empirically, our efficient implementation leads to 2x to 6x faster training without degrading accuracy, requires no manual tuning, and integrates seamlessly into standard RL algorithms.
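The core selection idea can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the function name `select_intermediate`, the thresholds, and the toy pass-rate table are all assumptions; in practice the difficulty estimates would come from cheap online rollouts with the verifiable reward.

```python
# Hypothetical sketch of intermediate-difficulty prompt selection
# (names, thresholds, and data are illustrative, not from the paper).

def select_intermediate(prompts, pass_rate, low=0.2, high=0.8):
    """Keep prompts whose estimated pass rate falls strictly in (low, high).

    Prompts the model always solves (rate >= high) or never solves
    (rate <= low) produce near-constant rewards, so the policy-gradient
    estimate carries little signal; intermediate difficulty maximizes
    reward variance and hence the gradient's signal-to-noise ratio.
    """
    return [p for p in prompts if low < pass_rate(p) < high]

# Toy pool: prompt id -> empirical pass rate from earlier rollouts.
rates = {"p1": 0.05, "p2": 0.50, "p3": 0.95, "p4": 0.35}
batch = select_intermediate(rates, rates.get)
print(batch)  # only the moderately difficult prompts survive
```

A full system would update the pass-rate estimates online as new rollouts arrive and re-filter each training batch, which is what makes the curriculum adaptive rather than fixed.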