🤖 AI Summary
This work addresses the performance degradation and inefficiency of large reasoning language models on complex tasks, often caused by redundant reasoning steps. To mitigate this issue, the authors propose a lightweight early-exit mechanism that dynamically monitors and terminates excessive deliberation via a novel path deviation index, a metric derived from high-entropy transition tokens. The approach is tightly integrated into the native reasoning process and requires no additional training or auxiliary models, thereby avoiding extra computational overhead. Evaluated across multiple benchmarks, the method significantly outperforms existing early-exit strategies, improving both reasoning efficiency and overall model performance.
📝 Abstract
Large Reasoning Language Models (LRLMs) demonstrate impressive capabilities on complex tasks by utilizing long Chain-of-Thought reasoning. However, they are prone to overthinking, generating redundant reasoning steps that degrade both performance and efficiency. Recently, early-exit strategies have been proposed to mitigate overthinking by dynamically and adaptively terminating redundant reasoning. However, current early-exit methods either introduce extra training overhead by relying on proxy models or limit inference throughput due to frequent switching between reasoning and generating probing answers. Moreover, most early-exit methods harm LRLM performance through over-truncation. Our insight stems from an observation: overthinking often causes LRLMs to deviate from the correct reasoning path, and this deviation is frequently accompanied by high-entropy transition tokens. Given this, we propose an early-exit method deeply coupled with the native reasoning process, which uses a path deviation index, a dedicated metric that monitors the frequent occurrence of high-entropy transition tokens, to dynamically detect and terminate overthinking trajectories. We conduct experiments across multiple benchmarks using LRLMs of different types and scales, and the results indicate that our method delivers the largest performance improvement over vanilla CoT compared to existing early-exit methods.
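To make the mechanism concrete, here is a minimal sketch of what an entropy-based early-exit monitor of this kind could look like. The paper does not specify the exact definition of the path deviation index, so the windowed-frequency formulation below, the `PathDeviationMonitor` class, the `TRANSITION_TOKENS` set, and all thresholds are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: monitor high-entropy transition tokens during decoding
# and signal early exit when they become frequent. All names and thresholds
# here are assumptions for illustration, not the paper's actual method.
import math
from collections import deque

# Assumed set of "transition" tokens that often mark reasoning-path shifts.
TRANSITION_TOKENS = {"wait", "alternatively", "however", "but", "hmm"}

def token_entropy(probs):
    """Shannon entropy (in nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

class PathDeviationMonitor:
    """Tracks how often high-entropy transition tokens occur in a sliding
    window; when their rate exceeds exit_ratio, reasoning is terminated."""

    def __init__(self, entropy_threshold=2.0, window=50, exit_ratio=0.1):
        self.entropy_threshold = entropy_threshold
        self.window = deque(maxlen=window)  # 1 = deviation event, 0 = normal
        self.exit_ratio = exit_ratio

    def update(self, token, probs):
        """Record one decoding step; return True if early exit should fire."""
        is_deviation = (token.lower().strip() in TRANSITION_TOKENS
                        and token_entropy(probs) > self.entropy_threshold)
        self.window.append(1 if is_deviation else 0)
        return self.path_deviation_index() > self.exit_ratio

    def path_deviation_index(self):
        """Fraction of recent steps that were high-entropy transitions."""
        return sum(self.window) / max(len(self.window), 1)
```

In use, the generation loop would call `update` with each sampled token and its next-token distribution, and stop the chain of thought (e.g., by forcing an end-of-reasoning token) once it returns `True`. Because the monitor reads quantities the decoder already computes, it adds no extra model calls, matching the training-free, probing-free design the abstract describes.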