Beyond Token Length: Step Pruner for Efficient and Accurate Reasoning in Large Language Models

📅 2025-10-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the pervasive "overthinking" problem in Large Reasoning Models (LRMs), characterized by unnecessarily long reasoning chains that waste computation without improving answers, this paper proposes Step Pruner (SP), a step-level pruning reinforcement learning framework. Unlike conventional token-count sparsity penalties, SP operates at the granularity of reasoning steps: a step-aware reward function decouples correctness from step redundancy, and a dynamic stopping mechanism halts policy updates when any output step exceeds a length limit, preventing policy collapse via reward hacking. Evaluated on four major reasoning benchmarks, the method maintains or even improves accuracy while substantially compressing reasoning length, e.g., cutting token usage by 69.7% on AIME24, demonstrating step-level controllable, robust, and efficient reasoning optimization.

📝 Abstract
Large Reasoning Models (LRMs) demonstrate strong performance on complex tasks but often suffer from excessive verbosity, known as "overthinking." Existing solutions via reinforcement learning (RL) typically penalize generated tokens to promote conciseness. However, these methods encounter two challenges: responses with fewer tokens do not always correspond to fewer reasoning steps, and models may develop hacking behavior in later stages of training by discarding reasoning steps to minimize token usage. In this work, we introduce Step Pruner (SP), an RL framework that steers LRMs toward more efficient reasoning by favoring compact reasoning steps. Our step-aware reward function prioritizes correctness while imposing penalties for redundant steps, and withholds rewards for incorrect responses to prevent the reinforcement of erroneous reasoning. Moreover, we propose a dynamic stopping mechanism: when the length of any output step exceeds the upper limit, we halt updates to prevent hacking behavior caused by merging steps. Extensive experiments across four reasoning benchmarks demonstrate that SP achieves state-of-the-art accuracy while significantly reducing response length. For instance, on AIME24, SP reduces token usage by 69.7%.
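The abstract describes a step-aware reward that prioritizes correctness, penalizes redundant steps, and withholds reward for wrong answers. A minimal sketch of that shape is below; the paper's exact formula is not given in this summary, so the step delimiter (`"\n\n"`), the linear penalty, and the coefficient `alpha` are all assumptions for illustration.

```python
def step_aware_reward(response: str, is_correct: bool, alpha: float = 0.05) -> float:
    """Hedged sketch of a step-aware reward in the spirit of Step Pruner (SP).

    Correctness dominates: incorrect responses get zero reward, so erroneous
    reasoning is never reinforced no matter how short it is. Correct responses
    pay a small penalty per extra reasoning step.
    """
    if not is_correct:
        return 0.0
    # Treat blank-line-separated chunks as reasoning steps (assumed convention).
    steps = [s for s in response.split("\n\n") if s.strip()]
    # Linear penalty per step beyond the first (assumed penalty shape).
    penalty = alpha * max(len(steps) - 1, 0)
    return max(1.0 - penalty, 0.0)
```

Under this shape, a correct three-step answer scores lower than an equally correct one-step answer, while any incorrect answer scores zero regardless of length.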
Problem

Research questions and friction points this paper is trying to address.

Reducing excessive verbosity and overthinking in Large Reasoning Models
Preventing token-based hacking behavior that sacrifices reasoning quality
Optimizing reasoning efficiency while maintaining high accuracy in responses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Step Pruner RL framework promotes compact reasoning steps
Step-aware reward penalizes redundant steps and withholds reward for incorrect responses
Dynamic stopping prevents step merging and hacking behavior
Canhui Wu
Xi'an Jiaotong University, JD Future Academy

Qiong Cao
JD Exploration Academy, JD.com
Computer Vision · 3D Human-centric Vision · Machine Learning

Chang Li
JD Future Academy

Zhenfang Wang
JD Future Academy

Chao Xue
Beihang University
Natural Language Processing · Large Language Model

Yuwei Fan
Xi'an Jiaotong University

Wei Xi
Xi'an Jiaotong University

Xiaodong He
JD Future Academy