Mitigating Overthinking in Large Reasoning Language Models via Reasoning Path Deviation Monitoring

📅 2026-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation and inefficiency of large reasoning language models in complex tasks, often caused by redundant reasoning steps. To mitigate this issue, the authors propose a lightweight early-exit mechanism that dynamically monitors and terminates excessive deliberation through a novel path deviation index—a metric derived from high-entropy transition words. This approach is tightly integrated into the native reasoning process, requiring no additional training or auxiliary models, thereby avoiding extra computational overhead. Evaluated across multiple benchmarks, the method significantly outperforms existing early-exit strategies, simultaneously enhancing both reasoning efficiency and overall model performance.

📝 Abstract
Large Reasoning Language Models (LRLMs) demonstrate impressive capabilities on complex tasks by utilizing long Chain-of-Thought reasoning. However, they are prone to overthinking, which generates redundant reasoning steps that degrade both performance and efficiency. Recently, early-exit strategies have been proposed to mitigate overthinking by dynamically and adaptively terminating redundant reasoning. However, current early-exit methods either introduce extra training overhead by relying on proxy models or limit inference throughput due to frequent context switching between reasoning and generating probing answers. Moreover, most early-exit methods harm LRLM performance due to over-truncation. Our insight stems from an observation: overthinking often causes LRLMs to deviate from the correct reasoning path, and such deviations are frequently accompanied by high-entropy transition tokens. Given this, we propose an early-exit method deeply coupled with the native reasoning process, which leverages the path deviation index as a dedicated monitoring metric for the frequent occurrence of high-entropy transition tokens, dynamically detecting and terminating overthinking trajectories. We conduct experiments across multiple benchmarks using LRLMs of different types and scales, and the results indicate that our method delivers the largest performance improvement over vanilla CoT among existing early-exit methods.
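The monitoring idea described in the abstract can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the paper's implementation: the transition-word list, the entropy threshold, and the sliding-window ratio used as the "path deviation index" are all assumptions chosen for clarity; the paper defines its own metric.

```python
import math

# Hypothetical transition words; the paper does not enumerate its list.
TRANSITION_WORDS = {"wait", "alternatively", "however", "hmm", "but"}

def token_entropy(probs):
    """Shannon entropy (in nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

class PathDeviationMonitor:
    """Illustrative early-exit monitor: signal overthinking when high-entropy
    transition tokens occur too often within a sliding window of recent tokens."""

    def __init__(self, entropy_threshold=1.5, window=50, max_deviation=0.1):
        self.entropy_threshold = entropy_threshold
        self.window = window
        self.max_deviation = max_deviation
        self.flags = []  # 1 if the token was a high-entropy transition token

    def update(self, token, probs):
        """Record one generated token; return True if reasoning should stop."""
        is_deviation = (token.lower().strip() in TRANSITION_WORDS
                        and token_entropy(probs) > self.entropy_threshold)
        self.flags.append(1 if is_deviation else 0)
        # Divide by the full window size so a short prefix cannot trigger
        # an exit from a single flagged token.
        deviation_index = sum(self.flags[-self.window:]) / self.window
        return deviation_index > self.max_deviation
```

In a decoding loop, `update` would be called once per generated token with the sampled token string and its next-token distribution; a `True` return would end the thinking phase and force the model to emit its final answer.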
Problem

Research questions and friction points this paper is trying to address.

overthinking
Large Reasoning Language Models
reasoning path deviation
early-exit
redundant reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

reasoning path deviation
early-exit
overthinking mitigation
high-entropy tokens
large reasoning language models
Weixin Guan
Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyberspace Security, University of Chinese Academy of Sciences, Beijing, China
Liang Li
Institute of Computing Technology, CAS
Computer Vision; Image Understanding; Multimedia Content Analysis
Jiapeng Liu
Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyberspace Security, University of Chinese Academy of Sciences, Beijing, China
Bing Li
Professor, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Video Analysis; Color Constancy; Web Mining; Multimedia
Peng Fu
Institute of Information Engineering, Chinese Academy of Sciences
Natural Language Processing
Chengyang Fang
School of Computer and Artificial Intelligence, Jiangxi University of Finance and Economics, Jiangxi, China
Xiaoshuai Hao
Beijing Academy of Artificial Intelligence (BAAI)
vision and language
Can Ma
Unknown affiliation
Weiping Wang
School of Information Science and Engineering, Central South University
Computer Networks; Network Security