Decomposing Elements of Problem Solving: What "Math" Does RL Teach?

📅 2025-05-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Prior evaluations of LLMs' mathematical reasoning rely solely on aggregate accuracy, failing to characterize fine-grained disparities across planning, execution, and verification capabilities, and are thus unable to explain how RL methods (e.g., GRPO) improve performance. Method: We propose the "Three Elements of Problem Solving" framework, decomposing mathematical reasoning into attributable planning, execution, and verification components; identify and name the "temperature distillation" phenomenon; and design an interpretable synthetic solution-tree navigation task. Results: Quantitative analysis reveals that GRPO markedly enhances execution robustness but not planning capability, yielding limited generalization bounded by a "coverage wall." Further, we demonstrate that incorporating structured exploration mechanisms breaks this bottleneck, significantly improving generalization to novel solution paths.

📝 Abstract
Mathematical reasoning tasks have become prominent benchmarks for assessing the reasoning capabilities of LLMs, especially with reinforcement learning (RL) methods such as GRPO showing significant performance gains. However, accuracy metrics alone do not support fine-grained assessment of capabilities and fail to reveal which problem-solving skills have been internalized. To better understand these capabilities, we propose to decompose problem solving into fundamental capabilities: Plan (mapping questions to sequences of steps), Execute (correctly performing solution steps), and Verify (identifying the correctness of a solution). Empirically, we find that GRPO mainly enhances the execution skill, improving execution robustness on problems the model already knows how to solve, a phenomenon we call temperature distillation. More importantly, we show that RL-trained models struggle with fundamentally new problems, hitting a 'coverage wall' due to insufficient planning skills. To explore RL's impact more deeply, we construct a minimal, synthetic solution-tree navigation task as an analogy for mathematical problem-solving. This controlled setup replicates our empirical findings, confirming that RL primarily boosts execution robustness. Importantly, in this setting, we identify conditions under which RL can potentially overcome the coverage wall through improved exploration and generalization to new solution paths. Our findings provide insights into the role of RL in enhancing LLM reasoning, expose key limitations, and suggest a path toward overcoming these barriers. Code is available at https://github.com/cfpark00/RL-Wall.
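The abstract does not specify the construction of the synthetic solution-tree navigation task; as a rough illustrative sketch only (the tree structure, reward scheme, and all names below are assumptions, not the paper's actual environment), a minimal version might pose navigation from a root to a unique goal leaf, where "planning" corresponds to knowing the correct path and "execution" to following it without slips:

```python
import random

def build_tree(depth, branching):
    """Build a full tree; each internal node maps child labels to subtrees."""
    if depth == 0:
        return "leaf"
    return {f"step{i}": build_tree(depth - 1, branching)
            for i in range(branching)}

def sample_solution_path(depth, branching, rng):
    """Pick one root-to-leaf path as the unique correct solution."""
    return [f"step{rng.randrange(branching)}" for _ in range(depth)]

def run_episode(tree, solution, policy):
    """Navigate with `policy`; reward 1 only if the full correct
    path is followed (an all-or-nothing execution test)."""
    node, path = tree, []
    while isinstance(node, dict):
        choice = policy(sorted(node.keys()), path)
        path.append(choice)
        node = node[choice]
    return 1.0 if path == solution else 0.0

rng = random.Random(0)
depth, branching = 3, 2
tree = build_tree(depth, branching)
solution = sample_solution_path(depth, branching, rng)

# A policy that "knows the plan" executes perfectly...
oracle = lambda options, path: solution[len(path)]
assert run_episode(tree, solution, oracle) == 1.0

# ...while a planless random policy succeeds only ~1/branching**depth
# of the time, which RL on execution alone cannot fix for unseen trees.
random_policy = lambda options, path: rng.choice(options)
hits = sum(run_episode(tree, solution, random_policy) for _ in range(4000))
print(round(hits / 4000, 2))
```

Under this toy framing, sharpening the policy toward paths it already completes sometimes (the temperature-distillation effect) raises reward on covered trees but leaves success on trees with unseen solution paths at the random baseline, which is one way to picture the coverage wall.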
Problem

Research questions and friction points this paper is trying to address.

Assessing RL's impact on LLM reasoning capabilities
Decomposing problem-solving into Plan, Execute, Verify skills
Identifying RL's limitations in handling novel problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposes problem solving into Plan, Execute, Verify
Identifies RL's execution robustness via temperature distillation
Explores RL overcoming coverage wall via exploration