🤖 AI Summary
Large reasoning language models often suffer from redundant reflection—triggered by phrases such as “Wait” or “Alternatively”—during long-chain reasoning, leading to overthinking, excessive token consumption, and degraded inference efficiency. To address this, we propose a certainty-guided reflection suppression method that introduces a dynamic confidence mechanism for real-time assessment of reasoning certainty during autoregressive generation; it selectively suppresses reflection triggers only when the model's confidence in its current response exceeds a threshold. Crucially, our approach requires no model fine-tuning or architectural modification, making it model-agnostic and deployable plug-and-play. Evaluated across multiple reasoning benchmarks, the method reduces token usage by an average of 18.5%–41.9%, substantially lowering inference cost while preserving accuracy.
📝 Abstract
Recent Large Reasoning Language Models (LRLMs) employ long chain-of-thought reasoning with complex reflection behaviors, typically signaled by specific trigger words (e.g., "Wait" and "Alternatively"), to enhance performance. However, these reflection behaviors can lead to the overthinking problem, in which redundant reasoning steps unnecessarily increase token usage, raise inference costs, and reduce practical utility. In this paper, we propose Certainty-Guided Reflection Suppression (CGRS), a novel method that mitigates overthinking in LRLMs while maintaining reasoning accuracy. CGRS operates by dynamically suppressing the model's generation of reflection triggers when it exhibits high confidence in its current response, thereby preventing redundant reflection cycles without compromising output quality. Our approach is model-agnostic, requires no retraining or architectural modifications, and can be integrated seamlessly with existing autoregressive generation pipelines. Extensive experiments across four reasoning benchmarks (i.e., AIME24, AMC23, MATH500, and GPQA-D) demonstrate CGRS's effectiveness: it reduces token usage by an average of 18.5% to 41.9% while preserving accuracy, and it achieves the best balance between length reduction and performance among state-of-the-art baselines. These results hold consistently across model architectures (e.g., the DeepSeek-R1-Distill series, QwQ-32B, and the Qwen3 family) and scales (4B to 32B parameters), highlighting CGRS's practical value for efficient reasoning.
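The core idea — gating reflection-trigger tokens at decoding time based on model confidence — can be sketched as a logit-masking step applied at each generation step. This is a minimal illustration only: it assumes the maximum softmax probability as the certainty signal and uses hypothetical trigger token ids; the paper's actual certainty estimate and trigger set may differ.

```python
import math

# Hypothetical vocabulary ids for reflection triggers such as "Wait"
# and "Alternatively" (in practice these come from the tokenizer).
TRIGGER_IDS = {7, 11}

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def suppress_triggers(logits, threshold=0.9):
    """Return logits with trigger tokens masked to -inf when the
    model's top-token probability exceeds `threshold` (i.e., the
    model is already confident, so reflection is suppressed)."""
    confidence = max(softmax(logits))
    if confidence >= threshold:
        return [-math.inf if i in TRIGGER_IDS else x
                for i, x in enumerate(logits)]
    return logits  # low confidence: leave reflection available

# Usage: high-confidence step masks triggers, low-confidence step does not.
peaked = [0.0] * 7 + [10.0]   # one dominant token -> confident
flat = [0.0] * 8              # uniform -> uncertain
masked = suppress_triggers(peaked)
untouched = suppress_triggers(flat)
```

Because the check runs per decoding step, suppression is dynamic: the same trigger words remain available whenever certainty drops back below the threshold, which is how the method avoids harming accuracy on genuinely hard steps.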