Prompt Curriculum Learning for Efficient LLM Post-Training

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the training sensitivity and inefficiency that suboptimal prompt-difficulty selection causes in LLM post-training, this paper proposes an online curriculum learning framework powered by a value model. The method dynamically identifies and selects medium-difficulty prompts, bypassing costly rollout-based filtering, and combines a lightweight RL architecture, policy-consistent difficulty estimation, and an adaptive batching mechanism for efficient, stable optimization. Its core innovation is the first use of a value model for real-time prompt difficulty modeling, enabling rollout-free curriculum-based reinforcement learning. When training on MATH and DeepScaleR, the method identifies intermediate-difficulty prompts 12.1× and 16.9× faster than rollout-based filtering, respectively, while achieving superior or comparable reasoning performance and significantly reducing total training time.

📝 Abstract
We introduce Prompt Curriculum Learning (PCL), a lightweight reinforcement learning (RL) algorithm that selects intermediate-difficulty prompts using a learned value model to post-train language models. Since post-training LLMs via RL remains sensitive to batching and prompt selection strategies, we first conduct a series of systematic experiments where we (1) determine the optimal training batch size that balances generation efficiency and gradient quality and (2) establish the importance of focusing on prompts of intermediate difficulty for the policy. We build upon these results to design PCL, which identifies prompts of intermediate difficulty for the current policy in an on-policy manner by using a value model that is concurrently updated based on the current policy. By focusing on informative prompts that yield high effective ratios, PCL achieves either the highest performance or requires significantly less time to reach comparable performance to its counterparts. Compared to rollout-based filtering methods, PCL avoids costly rollouts and achieves $12.1\times$ and $16.9\times$ faster speed on identifying intermediate-difficulty prompts when training on MATH and DeepScaleR, respectively. We further demonstrate that our value model accurately predicts prompt difficulty and allows PCL to focus on progressively more challenging prompts during RL. Our results present a new methodology that delivers improved tradeoff between upper-bound performance and efficiency for reasoning-focused RL.
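The selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `value_fn` stands in for the learned value model (which would score each prompt's predicted success probability under the current policy), and the `target` of 0.5 as the notion of "intermediate difficulty" is an assumption for the sketch.

```python
def select_intermediate_prompts(prompts, value_fn, batch_size, target=0.5):
    """Pick the prompts whose predicted success probability is closest to
    `target` (0.5 = intermediate difficulty), with no rollouts required.

    `value_fn` is a hypothetical stand-in for the learned value model:
    it maps a prompt to the policy's predicted success probability.
    """
    # Rank prompts by distance of predicted difficulty from the target.
    scored = [(abs(value_fn(p) - target), p) for p in prompts]
    scored.sort(key=lambda pair: pair[0])
    # Keep the batch_size prompts nearest to intermediate difficulty.
    return [p for _, p in scored[:batch_size]]


# Toy stand-in for the value model: fixed predicted success probabilities.
toy_values = {"easy": 0.95, "medium": 0.55, "hard": 0.10, "mid2": 0.45}
batch = select_intermediate_prompts(list(toy_values), toy_values.get, 2)
```

Because the value model is queried instead of rolling out the policy on every candidate prompt, filtering cost is decoupled from generation cost, which is the source of the reported speedups.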
Problem

Research questions and friction points this paper is trying to address.

Optimizing prompt selection strategies for efficient LLM post-training via reinforcement learning
Developing curriculum learning to identify intermediate-difficulty prompts during training
Improving tradeoff between performance and training efficiency for reasoning-focused RL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight RL algorithm for post-training language models
Selects intermediate-difficulty prompts using learned value model
Achieves faster training speed without costly rollouts