🤖 AI Summary
This work addresses the inefficiency and accuracy degradation in large reasoning models caused by excessive, redundant reasoning steps during complex tasks. The authors propose a low-overhead process-supervised reinforcement learning framework that, for the first time, leverages intrinsic special attention heads within the model to identify critical reasoning steps, enabling fine-grained, step-level credit assignment. Through an attention-guided dual-policy optimization strategy, the method simultaneously suppresses redundant steps while preserving essential ones, circumventing the coarse-grained penalties and high computational costs of conventional approaches. Evaluated across nine benchmark datasets, the proposed method significantly shortens reasoning chains while substantially improving reasoning performance, effectively balancing both efficiency and accuracy.
📝 Abstract
Large reasoning models trained with reinforcement learning with verifiable rewards (RLVR) achieve strong performance on complex reasoning tasks, yet they often overthink, generating redundant reasoning without performance gains. Existing trajectory-level length penalties often fail to shorten reasoning effectively and can degrade accuracy, as they treat all reasoning steps uniformly and lack fine-grained signals to distinguish redundancy from necessity. Meanwhile, process-supervised methods are typically resource-intensive and suffer from inaccurate credit assignment. To address these issues, we propose ATTNPO, a low-overhead process-supervised RL framework that leverages the model's intrinsic attention signals for step-level credit assignment. We first identify a set of special attention heads that naturally focus on essential steps while suppressing redundant ones. We then use the attention scores of these heads in two sub-strategies: one discourages redundant steps to mitigate overthinking, while the other reduces penalties on essential steps to preserve accuracy. Experimental results show that ATTNPO substantially reduces reasoning length while significantly improving performance across 9 benchmarks.
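The abstract does not give the exact formulation, but the core idea — scoring each reasoning step by the attention mass it receives from a chosen set of heads, then reweighting a trajectory-level advantage per step — can be sketched roughly as follows. All names (`step_scores`, `stepwise_advantages`), the threshold `tau`, and the specific weighting rule are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def step_scores(attn, step_spans, head_ids):
    """Average attention mass each reasoning step receives from selected heads.

    attn: array of shape (num_heads, seq_len, seq_len) with attention weights.
    step_spans: list of (start, end) token-index pairs, one per reasoning step.
    head_ids: indices of the "special" heads assumed to track essential steps.
    """
    scores = []
    for start, end in step_spans:
        # Attention directed at this step's tokens, averaged over the heads
        mass = attn[head_ids, :, start:end].mean()
        scores.append(mass)
    return np.array(scores)

def stepwise_advantages(base_adv, scores, tau=0.5):
    """Hypothetical step-level credit assignment: steps with high attention
    (essential) keep the full trajectory advantage; low-attention (redundant)
    steps have it discounted, so length penalties fall mostly on them."""
    norm = scores / (scores.max() + 1e-8)
    weights = np.where(norm > tau, 1.0, norm / tau)
    return base_adv * weights
```

In this sketch, a single scalar advantage from an RLVR reward is broadcast over steps with per-step weights, giving the fine-grained credit signal that a uniform trajectory-level penalty lacks.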