When to Continue Thinking: Adaptive Thinking Mode Switching for Efficient Reasoning

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large reasoning models (LRMs) incur excessive computational overhead on simple tasks due to redundant explicit reasoning. This paper identifies and leverages an inherent “internal self-recovery capability” of LRMs, proposing an adaptive thinking-mode switching mechanism that suppresses redundant explicit reasoning and activates implicit recovery for on-demand allocation of reasoning resources. Methodologically, we introduce ASRR—an Adaptive Self-Recovering Reasoning framework—built upon reinforcement learning, incorporating token-level, accuracy-aware length reward modeling, soft switching between reasoning modes, and an implicit reasoning recovery mechanism. On multiple benchmarks, ASRR reduces inference budget by 32.5% (1.5B) and 25.7% (7B) over GRPO, with only marginal pass@1 degradation (1.2% and 0.6%, respectively), while improving safety-harmlessness rates by up to 21.7%. The approach significantly enhances the joint optimization of efficiency, accuracy, and safety.

📝 Abstract
Large reasoning models (LRMs) achieve remarkable performance via long reasoning chains, but often incur excessive computational overhead due to redundant reasoning, especially on simple tasks. In this work, we systematically quantify the upper bounds of LRMs under both Long-Thinking and No-Thinking modes, and uncover the phenomenon of "Internal Self-Recovery Mechanism", where models implicitly supplement reasoning during answer generation. Building on this insight, we propose Adaptive Self-Recovery Reasoning (ASRR), a framework that suppresses unnecessary reasoning and enables implicit recovery. By introducing accuracy-aware length reward regulation, ASRR adaptively allocates reasoning effort according to problem difficulty, achieving high efficiency with negligible performance sacrifice. Experiments across multiple benchmarks and models show that, compared with GRPO, ASRR reduces reasoning budget by up to 32.5% (1.5B) and 25.7% (7B) with minimal accuracy loss (1.2% and 0.6% pass@1), and significantly boosts harmless rates on safety benchmarks (up to +21.7%). Our results highlight the potential of ASRR for enabling efficient, adaptive, and safer reasoning in LRMs.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational overhead in large reasoning models
Adaptive allocation of reasoning effort by difficulty
Improving efficiency with minimal performance sacrifice
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Self-Recovery Reasoning (ASRR) framework
Accuracy-aware length reward regulation
Suppresses unnecessary reasoning, enables implicit recovery
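The accuracy-aware length reward regulation above can be sketched as a simple reward function: correctness earns the base reward, and a length penalty proportional to token usage is applied only to correct answers, so the model is never encouraged to shorten reasoning at the cost of accuracy. This is a minimal illustration under assumed parameter names (`alpha`, `budget`), not the paper's exact formulation.

```python
def length_reward(correct: bool, n_tokens: int, budget: int, alpha: float = 0.5) -> float:
    """Hypothetical accuracy-aware length reward.

    correct:  whether the final answer is correct
    n_tokens: tokens spent on explicit reasoning
    budget:   reference token budget for the problem
    alpha:    weight of the length penalty (assumed hyperparameter)
    """
    acc = 1.0 if correct else 0.0
    # Penalize length only when the answer is correct, capped at the budget,
    # so incorrect responses are not rewarded for being short.
    penalty = alpha * min(n_tokens / budget, 1.0) if correct else 0.0
    return acc - penalty
```

Under this toy scheme, a correct short answer scores near 1.0, a correct budget-exhausting answer scores 1.0 − alpha, and any incorrect answer scores 0.0 regardless of length, which mirrors the paper's goal of on-demand allocation of reasoning effort.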
Authors
Xiaoyun Zhang (Meituan)
Jingqing Ruan (Meituan)
Xing Ma (Meituan, NLP engineer): Dialog System, Large Language Model, Conversation Analysis
Yawen Zhu (Meituan)
Haodong Zhao (Shanghai Jiao Tong University): Federated Learning, LLM
Hao Li (Meituan)
Jiansong Chen (Meituan)
Ke Zeng (Meituan)
Xunliang Cai (Meituan)