🤖 AI Summary
To address sparse rewards, excessively verbose solutions, and insufficient learning on hard instances in mathematical reasoning, this paper proposes GRPO-LEAD, a difficulty-aware reinforcement learning framework. Methodologically: (1) it introduces a novel length-dependent accuracy reward that jointly optimizes correctness and solution conciseness; (2) it incorporates explicit penalties for incorrect answers to sharpen decision boundaries; and (3) it designs a difficulty-weighted advantage re-scaling mechanism to amplify learning signals for complex problems. Built upon Group Relative Policy Optimization (GRPO), the work also examines how model scale and supervised fine-tuning (SFT) strategies affect reinforcement learning effectiveness. Experiments across multiple mathematical reasoning benchmarks demonstrate significant improvements in accuracy, solution brevity, and robustness. GRPO-LEAD effectively alleviates two critical limitations of standard GRPO, namely reward sparsity and poor convergence on challenging examples, thereby advancing the state of RL-based mathematical reasoning.
📝 Abstract
Recent advances in R1-like reasoning models leveraging Group Relative Policy Optimization (GRPO) have significantly improved the performance of language models on mathematical reasoning tasks. However, current GRPO implementations encounter critical challenges, including reward sparsity due to binary accuracy metrics, limited incentives for conciseness, and insufficient focus on complex reasoning tasks. To address these issues, we propose GRPO-LEAD, a suite of novel enhancements tailored for mathematical reasoning. Specifically, GRPO-LEAD introduces (1) a length-dependent accuracy reward to encourage concise and precise solutions, (2) an explicit penalty mechanism for incorrect answers to sharpen decision boundaries, and (3) a difficulty-aware advantage reweighting strategy that amplifies learning signals for challenging problems. Furthermore, we systematically examine the impact of model scale and supervised fine-tuning (SFT) strategies, demonstrating that larger-scale base models and carefully curated datasets significantly enhance reinforcement learning effectiveness. Extensive empirical evaluations and ablation studies confirm that GRPO-LEAD substantially mitigates previous shortcomings, resulting in language models that produce more concise, accurate, and robust reasoning across diverse mathematical tasks.
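The three enhancements described above can be illustrated with a minimal sketch. Note that the paper's exact formulas are not reproduced here: the reward shape, the penalty value, and the difficulty estimate (here, one minus the group pass rate) are all illustrative assumptions, not the authors' definitions.

```python
import math

def lead_reward(correct, length, target_len=512, alpha=0.5, beta=1.0):
    """Hypothetical length-dependent accuracy reward with an explicit
    penalty for incorrect answers (illustrative parameters only)."""
    if correct:
        # Reward decays smoothly as the solution grows past a target
        # length, encouraging concise correct answers.
        excess = max(0.0, length - target_len) / target_len
        return math.exp(-alpha * excess)
    # Incorrect answers receive a fixed negative reward rather than
    # zero, sharpening the decision boundary.
    return -beta

def difficulty_weighted_advantages(rewards, gamma=1.0):
    """Group-relative advantages (as in GRPO: standardized within the
    sampled group), rescaled by an assumed difficulty proxy:
    1 - pass rate, so harder problems get amplified learning signals."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5 or 1.0
    pass_rate = sum(r > 0 for r in rewards) / n
    weight = 1.0 + gamma * (1.0 - pass_rate)  # amplify hard groups
    return [weight * (r - mean) / std for r in rewards]
```

Under this sketch, a group where only one of four sampled solutions is correct receives a larger advantage multiplier than a group where all four succeed, which is the intended "LEAD" behavior of concentrating gradient signal on hard instances.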