Beyond Accuracy: Dissecting Mathematical Reasoning for LLMs Under Reinforcement Learning

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how reinforcement learning (RL) enhances the mathematical reasoning capabilities of large language models (LLMs), focusing on three core dimensions: plan execution, problem decomposition, and knowledge utilization. Method: We propose a fine-grained analytical framework that moves beyond conventional accuracy-centric evaluation. Leveraging RL paradigms such as GRPO, we introduce controllable reasoning trajectory injection, hierarchical error attribution, and difficulty-aware training. Contribution/Results: Contrary to prevailing assumptions, we present empirical evidence that RL does not primarily improve adherence to externally provided plans; rather, it significantly strengthens the model's capacity to autonomously construct and execute internal reasoning strategies, while enabling dynamic integration of heterogeneous knowledge sources. Experiments show that RL-finetuned models exhibit greater robustness on high-difficulty benchmarks, smaller performance degradation under external plan intervention, and improved cross-task consistency attributable to better knowledge fusion.
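The summary names GRPO as the underlying RL paradigm. The core of GRPO is that, instead of a learned value baseline, advantages are computed by standardizing each sampled completion's reward against its own group of completions for the same prompt. A minimal sketch of that group normalization (not the paper's code; the function name and the choice of population standard deviation are assumptions for illustration):

```python
import statistics

def grpo_advantages(rewards):
    """Group-normalized advantages in the GRPO style: given the rewards of
    G completions sampled for one prompt, standardize each reward against
    the group mean and standard deviation. A degenerate group (all rewards
    equal) yields zero advantage for every completion, i.e. no gradient
    signal from that prompt."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0.0:
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]
```

For example, with binary correctness rewards `[1, 0, 1, 0]` the correct completions get advantage +1 and the incorrect ones -1, while an all-correct group `[1, 1, 1, 1]` contributes nothing; this degenerate case is exactly why difficulty matters for training signal, as the summary's "difficulty-aware training" suggests.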

📝 Abstract
Reinforcement learning (RL) has become the dominant paradigm for endowing language models with advanced reasoning capabilities. Despite the substantial empirical gains demonstrated by RL-based training methods like GRPO, a granular understanding of their advantages is still lacking. To address this gap, we introduce a fine-grained analytic framework to dissect the impact of RL on reasoning. Our framework specifically investigates key elements that have been hypothesized to benefit from RL training: (1) plan-following and execution, (2) problem decomposition, and (3) improved reasoning and knowledge utilization. Using this framework, we gain insights beyond mere accuracy. For instance, providing models with explicit step-by-step plans surprisingly degrades performance on the most challenging benchmarks, yet RL-tuned models exhibit greater robustness, experiencing markedly smaller performance drops than their base counterparts. This suggests that RL may not primarily enhance the execution of external plans but rather empower models to formulate and follow internal strategies better suited to their reasoning processes. Conversely, we observe that RL enhances the model's capacity to integrate provided knowledge into its reasoning process, leading to performance improvements across diverse tasks. We also study problem difficulty, showing that training can be improved through new strategies for exploiting hard problems. Our findings lay a foundation for more principled training and evaluation of reasoning models.
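The abstract's point about exploiting hard problems can be made concrete: under group-based RL, prompts the model always solves (or always fails) produce identical rewards within a group and hence zero advantage, so difficulty-aware training typically upweights problems in the informative middle band. The paper's exact sampling scheme is not described here; the following is an illustrative weighting, assuming solve rate as the difficulty proxy:

```python
import random

def difficulty_weights(solve_rates):
    """Illustrative (not the paper's) difficulty-aware weighting: weight
    each problem by p * (1 - p), the variance of a Bernoulli reward with
    success probability p. Problems solved ~50% of the time get maximal
    weight; problems at 0% or 100% get zero weight, since uniform reward
    groups carry no gradient signal under group normalization."""
    return [p * (1.0 - p) for p in solve_rates]

def sample_problem(problems, solve_rates, rng=random):
    """Draw one training problem with probability proportional to its
    difficulty weight."""
    weights = difficulty_weights(solve_rates)
    return rng.choices(problems, weights=weights, k=1)[0]
```

With solve rates `[0.0, 0.5, 1.0]`, only the middle problem is ever sampled; in practice one would smooth the weights so trivially hard problems still appear occasionally as the model improves.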
Problem

Research questions and friction points this paper is trying to address.

Analyzing RL's impact on LLM reasoning beyond accuracy metrics
Investigating RL benefits in plan-following and problem decomposition
Enhancing knowledge integration and robustness in reasoning models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained analytic framework for RL impact
Investigates plan-following, problem decomposition, knowledge utilization
RL enhances internal strategy formulation and knowledge integration
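The robustness claim in the bullets above (RL-tuned models degrade less under external plan intervention) implies a simple evaluation quantity: the relative accuracy drop when an explicit step-by-step plan is injected into the prompt. The paper's exact metric is not specified here; a plausible sketch, with the function name assumed:

```python
def plan_intervention_drop(acc_unconstrained, acc_with_plan):
    """Relative accuracy drop when an external step-by-step plan is
    injected: (acc_free - acc_planned) / acc_free. Smaller values mean
    the model is more robust to external plan intervention; a negative
    value would mean the plan actually helped."""
    if acc_unconstrained == 0.0:
        return 0.0  # no baseline accuracy to degrade from
    return (acc_unconstrained - acc_with_plan) / acc_unconstrained
```

For instance, a base model falling from 50% to 25% accuracy under plan injection has a drop of 0.5, while an RL-tuned model falling from 50% to 45% has a drop of 0.1, matching the pattern the summary describes.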