🤖 AI Summary
To address the “learning cliff” problem in large language models (LLMs)—where prolonged zero-reward episodes cause vanishing gradients and stalled reasoning progress—this paper proposes a progressive training framework grounded in dynamic diagnosis of learning plateaus. Methodologically, it introduces a hierarchical in-context scaffolding mechanism that adaptively injects hints, ranging from abstract concepts to concrete steps, when autonomous learning stalls. The framework integrates reinforcement learning from verifiable rewards, Group Relative Policy Optimization (GRPO), and dynamic prompt modulation to balance autonomy and guidance. Evaluated on the AIME24 mathematics benchmark, Qwen2.5-Math-7B achieves a relative 44.3% improvement in pass@1 over the vanilla GRPO baseline, a substantial gain on high-difficulty mathematical reasoning. Key contributions include: (i) an adaptive, hierarchy-aware hinting strategy triggered by learning diagnostics; (ii) the synergistic integration of verifiable reward shaping, groupwise policy comparison, and real-time prompt control; and (iii) empirical validation of sustained reasoning gains on problems at the edge of the model's capability.
📝 Abstract
Reinforcement learning from verifiable rewards has emerged as a powerful technique for enhancing the complex reasoning abilities of Large Language Models (LLMs). However, these methods are fundamentally constrained by the “learning cliff” phenomenon: when faced with problems far beyond their current capabilities, models consistently fail, yielding a persistent zero-reward signal. In policy optimization algorithms like GRPO, this collapses the advantage calculation to zero, rendering these difficult problems invisible to the learning gradient and stalling progress. To overcome this, we introduce Scaf-GRPO (Scaffolded Group Relative Policy Optimization), a progressive training framework that strategically provides minimal guidance only when a model's independent learning has plateaued. The framework first diagnoses learning stagnation and then intervenes by injecting tiered in-prompt hints, ranging from abstract concepts to concrete steps, enabling the model to construct a valid solution on its own. Extensive experiments on challenging mathematics benchmarks demonstrate Scaf-GRPO's effectiveness, boosting the pass@1 score of the Qwen2.5-Math-7B model on the AIME24 benchmark by a relative 44.3% over a vanilla GRPO baseline. This result demonstrates that our framework provides a robust and effective methodology for unlocking a model's ability to solve problems previously beyond its reach, a critical step towards extending the frontier of autonomous reasoning in LLMs.
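The advantage collapse the abstract describes follows directly from GRPO's groupwise normalization: each rollout's advantage is its reward minus the group mean, divided by the group's standard deviation. A minimal sketch (function name and epsilon constant are illustrative, not from the paper) shows why an all-zero reward group contributes no gradient signal:

```python
import statistics

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style groupwise advantage: (r - group mean) / (group std + eps)."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Mixed-outcome group: normalized advantages are nonzero, so the
# policy gradient distinguishes successful rollouts from failures.
mixed = group_relative_advantages([1.0, 0.0, 0.0, 1.0])

# "Learning cliff" group: every rollout fails, the mean is zero,
# and every advantage collapses to exactly zero -- the problem
# becomes invisible to the learning gradient.
cliff = group_relative_advantages([0.0, 0.0, 0.0, 0.0])
print(cliff)  # [0.0, 0.0, 0.0, 0.0]
```

Scaf-GRPO's tiered in-prompt hints aim to convert such all-fail groups into mixed-outcome groups, restoring a usable advantage signal.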