GRPO-LEAD: A Difficulty-Aware Reinforcement Learning Approach for Concise Mathematical Reasoning in Language Models

📅 2025-04-13
🤖 AI Summary
To address sparse rewards, excessively verbose solutions, and insufficient learning on hard instances in mathematical reasoning, this paper proposes GRPO-LEAD, a difficulty-aware reinforcement learning framework. Methodologically: (1) it introduces a novel length-dependent accuracy reward that jointly optimizes correctness and solution conciseness; (2) it incorporates explicit penalties for incorrect answers to sharpen decision boundaries; and (3) it designs a difficulty-weighted advantage re-scaling mechanism to amplify learning signals for complex problems. Built upon Group Relative Policy Optimization (GRPO), GRPO-LEAD additionally examines how supervised fine-tuning (SFT) strategies and model scale affect reinforcement learning effectiveness. Experiments across multiple mathematical reasoning benchmarks demonstrate significant improvements in accuracy, solution brevity, and robustness. GRPO-LEAD effectively alleviates two critical limitations of standard GRPO—reward sparsity and poor convergence on challenging examples—thereby advancing the state of RL-based mathematical reasoning.
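The first two mechanisms can be sketched as a single shaped reward function. This is an illustrative sketch only: the function name, the `tanh` shaping, and all parameter values below are assumptions, not the paper's actual formulas.

```python
import math

def shaped_reward(correct: bool, length: int, mean_len: float,
                  alpha: float = 0.5, penalty: float = 1.0) -> float:
    """Hypothetical length-dependent accuracy reward with an explicit
    penalty for wrong answers (a sketch, not the paper's definition).

    Instead of a sparse binary 0/1 signal, correct answers earn more
    when they are shorter than the group's mean solution length, and
    incorrect answers receive a negative penalty rather than zero.
    """
    if not correct:
        # explicit penalty sharpens the decision boundary vs. a flat 0
        return -penalty
    # reward in (1 - alpha, 1 + alpha): shorter-than-average correct
    # solutions score higher, longer ones score lower
    z = (mean_len - length) / max(mean_len, 1.0)
    return 1.0 + alpha * math.tanh(z)
```

Under this shaping, a correct 200-token solution outscores a correct 600-token one when the group average is 400 tokens, while any wrong answer is pushed below both.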

📝 Abstract
Recent advances in R1-like reasoning models leveraging Group Relative Policy Optimization (GRPO) have significantly improved the performance of language models on mathematical reasoning tasks. However, current GRPO implementations encounter critical challenges, including reward sparsity due to binary accuracy metrics, limited incentives for conciseness, and insufficient focus on complex reasoning tasks. To address these issues, we propose GRPO-LEAD, a suite of novel enhancements tailored for mathematical reasoning. Specifically, GRPO-LEAD introduces (1) a length-dependent accuracy reward to encourage concise and precise solutions, (2) an explicit penalty mechanism for incorrect answers to sharpen decision boundaries, and (3) a difficulty-aware advantage reweighting strategy that amplifies learning signals for challenging problems. Furthermore, we systematically examine the impact of model scale and supervised fine-tuning (SFT) strategies, demonstrating that larger-scale base models and carefully curated datasets significantly enhance reinforcement learning effectiveness. Extensive empirical evaluations and ablation studies confirm that GRPO-LEAD substantially mitigates previous shortcomings, resulting in language models that produce more concise, accurate, and robust reasoning across diverse mathematical tasks.
Problem

Research questions and friction points this paper is trying to address.

Addresses reward sparsity in GRPO with binary accuracy metrics
Encourages concise solutions via length-dependent accuracy rewards
Enhances learning signals for complex reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Length-dependent accuracy reward for concise solutions
Explicit penalty mechanism for incorrect answers
Difficulty-aware advantage reweighting for challenging problems
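The third contribution can be sketched on top of GRPO's group-normalized advantages. Everything below (the linear weighting scheme, the `w_min`/`w_max` bounds, the use of group pass rate as a difficulty proxy) is an assumed illustration of the idea, not the paper's exact re-scaling rule.

```python
def reweighted_advantages(rewards: list[float], pass_rate: float,
                          w_min: float = 1.0, w_max: float = 2.0) -> list[float]:
    """Sketch of difficulty-aware advantage re-scaling (names assumed).

    GRPO normalizes each sampled solution's reward within its group;
    here the normalized advantage is additionally scaled up when the
    group's pass rate is low (i.e. the problem is hard), amplifying
    the learning signal on challenging problems.
    """
    mu = sum(rewards) / len(rewards)
    sigma = (sum((r - mu) ** 2 for r in rewards) / len(rewards)) ** 0.5
    sigma = sigma if sigma > 0 else 1.0          # guard degenerate groups
    difficulty = 1.0 - pass_rate                 # hard problems -> near 1
    w = w_min + (w_max - w_min) * difficulty     # linear difficulty weight
    return [w * (r - mu) / sigma for r in rewards]
```

For a group where half the samples are correct, each normalized advantage is multiplied by 1.5; for a group with no correct samples, the multiplier reaches 2.0, so gradients from hard problems carry proportionally more weight.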
👤 Authors
Jixiao Zhang (Johns Hopkins University)
Chunsheng Zuo (Johns Hopkins University)