Learning to Reason at the Frontier of Learnability

📅 2025-02-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In reinforcement learning (RL) fine-tuning of large language models (LLMs) for mathematical reasoning, training signals are sparse: many problems yield uniformly successful or uniformly failed rollouts within a single sampling batch, producing uninformative gradient updates. Method: We propose a learnability-aware dynamic curriculum learning framework. Its core innovation is the first integration of "sampling for learnability" into LLM RL fine-tuning, using success-rate variance as a learnability metric to dynamically identify and prioritize problems at the learning frontier, i.e. those the model solves sometimes but not reliably. The method integrates with PPO/VinePPO, incorporating online variance estimation, adaptive problem sampling, and curriculum scheduling. Results: Evaluated across multiple mathematical reasoning benchmarks and RL algorithms, our approach significantly improves training efficiency and final reasoning performance, empirically validating learnability-guided curricula as an effective mechanism for advancing LLM mathematical reasoning capabilities.
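The sampling step described above can be sketched in a few lines. For a question with empirical success rate p, the Bernoulli success-rate variance is p(1 - p), which peaks at p = 0.5 and vanishes for always-solved or never-solved questions. A minimal illustration, assuming per-question success rates are already estimated (function and variable names are illustrative, not from the paper):

```python
import random

def learnability(success_rate: float) -> float:
    """Bernoulli success-rate variance p(1 - p): maximal at p = 0.5,
    zero for questions solved on every rollout (p = 1) or none (p = 0)."""
    return success_rate * (1.0 - success_rate)

def sample_frontier_batch(success_rates: dict, batch_size: int, seed: int = 0) -> list:
    """Draw a training batch with probability proportional to learnability.
    Uniformly solved or uniformly failed questions carry zero weight and
    are never selected; falls back to uniform sampling if all weights are zero."""
    rng = random.Random(seed)
    ids = list(success_rates)
    weights = [learnability(success_rates[q]) for q in ids]
    if sum(weights) == 0.0:
        return rng.sample(ids, min(batch_size, len(ids)))
    return rng.choices(ids, weights=weights, k=batch_size)

# Example: only the partially solved question lies on the learning frontier.
rates = {"q1": 0.0, "q2": 0.5, "q3": 1.0}
batch = sample_frontier_batch(rates, batch_size=4)  # every draw is "q2"
```

In a real training loop the batch would feed the PPO/VinePPO update; this sketch only covers the question-selection step.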

📝 Abstract
Reinforcement learning is now widely adopted as the final stage of large language model training, especially for reasoning-style tasks such as maths problems. Typically, models attempt each question many times during a single training step and learn from their successes and failures. However, we demonstrate that throughout training with two popular algorithms (PPO and VinePPO) on two widely used datasets, many questions are either solved by all attempts - meaning they are already learned - or by none - providing no meaningful training signal. To address this, we adapt a method from the reinforcement learning literature - sampling for learnability - and apply it to the reinforcement learning stage of LLM training. Our curriculum prioritises questions with high variance of success, i.e. those where the agent sometimes succeeds, but not always. Our findings demonstrate that this curriculum consistently boosts training performance across multiple algorithms and datasets, paving the way for more efficient and effective reinforcement learning in LLMs.
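Since each question is attempted many times per training step, success rates can be estimated online as training proceeds. A minimal sketch of such a tracker, using an exponential moving average - an assumed implementation detail, as the paper's exact estimator and threshold may differ:

```python
class SuccessTracker:
    """Running estimate of each question's solve rate across training steps,
    maintained as an exponential moving average (EMA)."""

    def __init__(self, decay: float = 0.9):
        self.decay = decay
        self.rates: dict = {}

    def update(self, question_id, rollout_successes) -> float:
        """rollout_successes: list of 0/1 outcomes from this step's attempts.
        Blends the batch success rate into the running estimate."""
        p = sum(rollout_successes) / len(rollout_successes)
        prev = self.rates.get(question_id, p)  # first observation seeds the EMA
        self.rates[question_id] = self.decay * prev + (1.0 - self.decay) * p
        return self.rates[question_id]

    def is_informative(self, question_id, eps: float = 0.05) -> bool:
        """Questions with solve rate near 0 or 1 yield little gradient signal;
        only those in between sit at the learning frontier."""
        p = self.rates.get(question_id, 0.5)
        return eps < p < 1.0 - eps

tracker = SuccessTracker()
tracker.update("easy_q", [1, 1, 1, 1])    # already learned: uninformative
tracker.update("frontier_q", [1, 0, 1, 0])  # sometimes solved: informative
```

A curriculum built on this tracker would sample preferentially from the questions for which `is_informative` returns True.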
Problem

Research questions and friction points this paper is trying to address.

Enhance reinforcement learning efficiency
Address learnability variance in tasks
Optimize LLM training curriculum
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sampling for learnability
Prioritizing high variance questions
Boosting LLM training efficiency