🤖 AI Summary
To address the high computational cost and low efficiency caused by fixed-length token budgets in large language model (LLM) inference, this work proposes a dynamic length-control curriculum learning framework: it allocates generous token budgets early in training to facilitate exploratory long-chain reasoning, then progressively tightens the budget to guide the model toward concise, efficient compressed reasoning paths. Methodologically, we extend Group Relative Policy Optimization (GRPO) with a multi-signal reward function integrating correctness verification, length efficiency, and output format compliance, coupled with a decaying token budget scheduling strategy. This is the first systematic application of the “explore→compress” paradigm to length-constrained reasoning training. On mathematical reasoning benchmarks—GSM8K and MATH500—our approach achieves significant improvements under identical inference cost: +3.2% accuracy and +41% token efficiency, outperforming all fixed-budget baselines.
📝 Abstract
Recent work on enhancing the reasoning abilities of large language models (LLMs) has introduced explicit length control as a means of constraining computational cost while preserving accuracy. However, existing approaches rely on fixed-length training budgets, which do not take advantage of the natural progression from exploration to compression during learning. In this work, we propose a curriculum learning strategy for length-controlled reasoning using Group Relative Policy Optimization (GRPO). Our method starts with generous token budgets and gradually tightens them over training, encouraging models to first discover effective solution strategies and then distill them into more concise reasoning traces. We augment GRPO with a reward function that balances three signals: task correctness (via verifier feedback), length efficiency, and formatting adherence (via structural tags). Experiments on GSM8K, MATH500, SVAMP, College Math, and GSM+ demonstrate that curriculum-based training consistently outperforms fixed-budget baselines at the same final budget, achieving higher accuracy and significantly improved token efficiency. We further ablate the impact of reward weighting and decay schedule design, showing that progressive constraint serves as a powerful inductive bias for training efficient reasoning models. Our code and checkpoints are released at: https://github.com/hammoudhasan/curriculum_grpo.