APR: Penalizing Structural Redundancy in Large Reasoning Models via Anchor-based Process Rewards

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of large reasoning models during test-time scaling, where excessive "overthinking" leads to redundant self-verification and wasted computation. The authors formally define the "Answer-Stable Tail" (AST) phenomenon and introduce the concept of a "reasoning anchor": the earliest position at which the model's answer stabilizes. Building on this, they propose an anchor-aware process reward mechanism that penalizes redundant reasoning steps beyond the anchor, and they integrate it with a length-sensitive policy optimization algorithm for reinforcement learning. Experiments on 1.5B and 7B models across five mathematical reasoning benchmarks show that the method significantly improves inference efficiency without compromising accuracy, achieving a Pareto-optimal trade-off between accuracy and computational cost while substantially reducing RL training overhead.
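To make the two core ideas concrete, here is a minimal sketch of how one might locate a reasoning anchor and shape a reward around it. This is an illustration, not the paper's implementation: the function names, the per-step answer extraction, and the linear tail penalty (`tail_penalty`) are all assumptions; the paper's actual APR formulation and its integration with the RL objective are more involved.

```python
def find_reasoning_anchor(step_answers):
    """Return the earliest step index from which the extracted answer
    never changes again (the 'reasoning anchor'), or None if the trace
    is empty. step_answers[i] is the answer parsed after step i."""
    if not step_answers:
        return None
    final = step_answers[-1]
    anchor = len(step_answers) - 1
    # Walk backward while the answer still matches the final one.
    for i in range(len(step_answers) - 1, -1, -1):
        if step_answers[i] == final:
            anchor = i
        else:
            break
    return anchor


def anchor_process_reward(step_answers, correct, tail_penalty=0.05):
    """Outcome reward minus a penalty proportional to the length of the
    post-anchor tail (the AST). Steps before the anchor are untouched."""
    base = 1.0 if correct else 0.0
    anchor = find_reasoning_anchor(step_answers)
    if anchor is None:
        return base
    tail_len = len(step_answers) - 1 - anchor  # redundant post-anchor steps
    return base - tail_penalty * tail_len
```

For a trace whose per-step answers are `["3", "4", "4", "4"]`, the anchor is step 1 (the answer first stabilizes there), so two post-anchor steps are penalized while the pre-anchor reasoning is left alone.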

📝 Abstract
Test-Time Scaling (TTS) has significantly enhanced the capabilities of Large Reasoning Models (LRMs) but introduces a critical side effect known as overthinking. We conduct a preliminary study to rethink this phenomenon from a fine-grained perspective. We observe that LRMs frequently perform repetitive self-verification without revision even after obtaining the final answer during the reasoning process. We formally define the position where the answer first stabilizes as the Reasoning Anchor. By analyzing pre- and post-anchor reasoning behaviors, we uncover a structural redundancy ingrained in LRMs: meaningless repetitive verification after the first complete answer is derived, which we term the Answer-Stable Tail (AST). Motivated by this observation, we propose Anchor-based Process Reward (APR), a structure-aware reward-shaping method that localizes the reasoning anchor and penalizes only the post-anchor AST. Leveraging a policy optimization algorithm suited to length penalties, our APR models reach the performance-efficiency Pareto frontier at the 1.5B and 7B scales, averaged across five mathematical reasoning datasets, while requiring substantially fewer computational resources for RL training.
Problem

Research questions and friction points this paper is trying to address.

Overthinking
Structural Redundancy
Large Reasoning Models
Answer-Stable Tail
Test-Time Scaling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Anchor-based Process Reward
Reasoning Anchor
Answer-Stable Tail
Structural Redundancy
Test-Time Scaling