Mitigating Overthinking in Large Reasoning Models via Manifold Steering

📅 2025-05-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large reasoning models (LRMs) incur substantial computational overhead on mathematical and coding tasks due to "overthinking": redundant verification loops and repetitive deliberation during inference. Through the lens of mechanistic interpretability, this work shows that the overthinking tendency is captured by a single direction in activation space, but that simple directional intervention plateaus and then degrades as its strength increases, and traces this limit to the phenomenon's tie to a low-dimensional activation manifold. The authors propose **Manifold Steering**, which projects the steering direction onto this low-dimensional manifold using a theoretical approximation of the interference noise introduced by high-dimensional steering. On DeepSeek-R1 distilled models, Manifold Steering reduces output token count by up to 71% on mathematical benchmarks while maintaining or improving accuracy, and it transfers across domains, delivering consistent token reduction on code generation and knowledge-based QA tasks.

📝 Abstract
Recent advances in Large Reasoning Models (LRMs) have demonstrated remarkable capabilities in solving complex tasks such as mathematics and coding. However, these models frequently exhibit a phenomenon known as overthinking during inference, characterized by excessive validation loops and redundant deliberation, leading to substantial computational overheads. In this paper, we aim to mitigate overthinking by investigating the underlying mechanisms from the perspective of mechanistic interpretability. We first showcase that the tendency of overthinking can be effectively captured by a single direction in the model's activation space and the issue can be eased by intervening the activations along this direction. However, this efficacy soon reaches a plateau and even deteriorates as the intervention strength increases. We therefore systematically explore the activation space and find that the overthinking phenomenon is actually tied to a low-dimensional manifold, which indicates that the limited effect stems from the noises introduced by the high-dimensional steering direction. Based on this insight, we propose Manifold Steering, a novel approach that elegantly projects the steering direction onto the low-dimensional activation manifold given the theoretical approximation of the interference noise. Extensive experiments on DeepSeek-R1 distilled models validate that our method reduces output tokens by up to 71% while maintaining and even improving the accuracy on several mathematical benchmarks. Our method also exhibits robust cross-domain transferability, delivering consistent token reduction performance in code generation and knowledge-based QA tasks. Code is available at: https://github.com/Aries-iai/Manifold_Steering.
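The abstract describes two steps: capturing the overthinking tendency as a single direction in activation space, and intervening on activations along that direction. A common way to obtain such a direction in the steering literature is a difference of mean activations between contrasting examples; the sketch below illustrates that recipe, which is an assumption on our part, not the authors' released code (see the linked repository for the actual implementation).

```python
import numpy as np

def overthinking_direction(acts_over: np.ndarray, acts_concise: np.ndarray) -> np.ndarray:
    """Estimate a unit 'overthinking' direction from layer activations.

    acts_over / acts_concise: (n_samples, hidden) activations collected at a
    chosen layer on overthinking vs. concise reasoning traces (hypothetical
    difference-of-means estimator, used here only for illustration).
    """
    d = acts_over.mean(axis=0) - acts_concise.mean(axis=0)
    return d / np.linalg.norm(d)

def steer(activations: np.ndarray, d: np.ndarray, alpha: float) -> np.ndarray:
    """Intervene by shifting every token's activation against direction d.

    activations: (seq_len, hidden); alpha controls intervention strength,
    which per the abstract helps only up to a plateau.
    """
    return activations - alpha * d
```

Usage: collect activations for the two trace types, compute `d` once, then apply `steer` inside the forward pass of the chosen layer at inference time.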
Problem

Research questions and friction points this paper is trying to address.

Mitigate overthinking in Large Reasoning Models (LRMs).
Reduce the computational overhead caused by redundant deliberation and excessive validation loops.
Preserve or improve accuracy while shortening reasoning traces.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifying a single direction in activation space that captures the overthinking tendency, and steering activations along it
Projecting the steering direction onto the low-dimensional activation manifold to suppress interference noise
Reducing output tokens by up to 71% while maintaining or improving accuracy
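The core geometric move named above is projecting the steering direction onto the low-dimensional activation manifold. A minimal sketch of that idea, assuming the manifold is approximated by a top-k PCA subspace of sampled activations (the rank k and the PCA estimator are our illustrative assumptions, not necessarily the paper's construction):

```python
import numpy as np

def manifold_basis(acts: np.ndarray, k: int) -> np.ndarray:
    """Estimate a rank-k basis of the activation manifold via SVD/PCA.

    acts: (n_samples, hidden) sampled activations.
    Returns (k, hidden) orthonormal principal directions.
    """
    centered = acts - acts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]

def project_to_manifold(d: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Orthogonally project steering direction d onto span(basis).

    Components of d outside the subspace are treated as interference
    noise and discarded; the result is re-normalized to unit length.
    """
    d_proj = basis.T @ (basis @ d)
    norm = np.linalg.norm(d_proj)
    return d_proj / norm if norm > 0 else d_proj
```

The projected direction then replaces the raw one in the intervention step, which is what lets larger intervention strengths keep helping instead of degrading output quality.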