When Can Large Reasoning Models Save Thinking? Mechanistic Analysis of Behavioral Divergence in Reasoning

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large reasoning models (LRMs) suffer from “overthinking,” which degrades inference efficiency. This work studies reinforcement learning (RL)-trained LRMs under “think less” prompting, using multi-dimensional behavioral analysis—attention visualization, termination-confidence modeling, clustering, and causal attribution—to systematically characterize their reasoning behaviors. It identifies, for the first time, three distinct reasoning modes: no thinking, explicit thinking, and implicit thinking. Results show that implicit thinking preserves high accuracy while substantially compressing response length; no thinking shortens outputs but severely harms accuracy; and a fundamental trade-off holds among accuracy, response length, and termination confidence across modes. Crucially, the analysis uncovers mode-specific divergence in the RL-trained thinking-termination mechanism, revealing intrinsic inconsistencies in current optimization objectives. This work establishes a paradigm for efficient, trustworthy, and interpretable reasoning control in LRMs.

📝 Abstract
Large reasoning models (LRMs) have significantly advanced performance on complex tasks, yet their tendency to overthink introduces inefficiencies. This study investigates the internal mechanisms of reinforcement learning (RL)-trained LRMs when prompted to save thinking, revealing three distinct thinking modes: no thinking (NT), explicit thinking (ET), and implicit thinking (IT). Through comprehensive analysis of confidence in thinking termination, attention from thinking to generation, and attentional focus on input sections, we uncover key factors influencing the reasoning behaviors. We further find that NT reduces output length at the cost of accuracy, while ET and IT maintain accuracy with reduced response length. Our findings expose fundamental inconsistencies in RL-optimized LRMs, necessitating adaptive improvements for reliable efficiency.
Problem

Research questions and friction points this paper is trying to address.

Investigates inefficiencies in large reasoning models caused by overthinking
Analyzes three thinking modes: no thinking, explicit thinking, and implicit thinking
Examines trade-offs between accuracy and output length across reasoning behaviors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mechanistic analysis of reinforcement learning-trained large reasoning models
Identification of three thinking modes: no thinking (NT), explicit thinking (ET), and implicit thinking (IT)
Analysis of termination confidence, thinking-to-generation attention, and attentional focus on input sections
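The “confidence in thinking termination” factor can be pictured as the probability mass the model places on the token that closes its thinking segment at each generation step. The sketch below is a toy illustration of that idea, not the paper's implementation: it assumes a raw next-token logit vector and a hypothetical vocabulary index for an end-of-thinking marker (e.g. a `</think>` token).

```python
import math

def termination_confidence(logits, end_think_id):
    """Softmax probability assigned to the end-of-thinking token.

    Toy illustration of 'thinking-termination confidence': given the raw
    next-token logit vector `logits`, how strongly does the model favour
    emitting the (hypothetical) token at index `end_think_id` that closes
    the thinking segment?
    """
    m = max(logits)  # subtract max before exponentiating for numerical stability
    exps = [math.exp(x - m) for x in logits]
    return exps[end_think_id] / sum(exps)

# Toy 4-token vocabulary; index 3 plays the role of the `</think>` marker.
confident = termination_confidence([0.0, 0.0, 0.0, 5.0], 3)  # strongly favours stopping
hesitant = termination_confidence([2.0, 2.0, 2.0, 2.0], 3)   # uniform logits, no preference
```

In a real analysis these logits would come from the model's output head at each decoding step, so confidence can be tracked over the course of a response and compared across the three thinking modes.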
Rongzhi Zhu
State Key Laboratory for Novel Software Technology, Nanjing University, China
Yi Liu
State Key Laboratory for Novel Software Technology, Nanjing University, China
Zequn Sun
Nanjing University
Yiwei Wang
University of California, Merced, USA
Wei Hu
State Key Laboratory for Novel Software Technology, Nanjing University, China; National Institute of Healthcare Data Science, Nanjing University, China