🤖 AI Summary
Large reasoning models often suffer from inefficiency and even degraded accuracy on complex tasks because they generate excessively long and unproductive chains of thought. This work reveals, for the first time, that such models inherently possess a latent capability to "stop reasoning at the right time," and introduces the Self-Aware Guided Efficient Reasoning (SAGE) paradigm, which explicitly activates this ability through a mixed sampling strategy. Building upon SAGE, the authors further develop the SAGE-RL framework by integrating it into group-based reinforcement learning. Experimental results demonstrate that the proposed approach significantly improves both reasoning accuracy and efficiency across multiple mathematical reasoning benchmarks, while successfully embedding this efficient reasoning mechanism into the standard pass@1 evaluation pipeline.
📄 Abstract
Recent advancements in large reasoning models (LRMs) have greatly improved their capabilities on complex reasoning tasks through long chains of thought (CoTs). However, this approach often results in substantial redundancy, impairing computational efficiency and causing significant delays in real-time applications. Recent studies show that longer reasoning chains are frequently uncorrelated with correctness and can even be detrimental to accuracy. Analyzing this phenomenon in greater depth, we surprisingly uncover and empirically verify that LRMs implicitly know the appropriate time to stop thinking, though this capability is obscured by current sampling paradigms. Motivated by this, we introduce SAGE (Self-Aware Guided Efficient Reasoning), a novel sampling paradigm that unleashes this efficient reasoning potential. Furthermore, we integrate SAGE as a mixed sampling strategy into group-based reinforcement learning (SAGE-RL), which effectively incorporates SAGE-discovered efficient reasoning patterns into standard pass@1 inference, markedly enhancing both the reasoning accuracy and efficiency of LRMs across multiple challenging mathematical benchmarks.
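To make the "mixed sampling into group-based RL" idea concrete, the toy sketch below shows one plausible reading: each prompt's rollout group mixes standard full-length samples with early-stop ("SAGE-style") samples, and advantages are normalized within the group, as in group-based methods such as GRPO. All function names, the reward model, and the mixing ratio here are illustrative assumptions, not the paper's actual implementation.

```python
import random
import statistics

def sample_rollout(early_stop: bool, rng: random.Random):
    """Toy rollout: returns (reward, length).
    In this mock-up, early-stop rollouts are simply shorter on average;
    the 0/1 reward stands in for answer correctness."""
    length = rng.randint(50, 200) if early_stop else rng.randint(200, 800)
    reward = 1.0 if rng.random() < 0.6 else 0.0  # mock correctness reward
    return reward, length

def mixed_group_advantages(group_size=8, early_stop_ratio=0.5, seed=0):
    """Build one mixed rollout group and return group-normalized advantages.
    `early_stop_ratio` (an assumed hyperparameter) controls how many
    rollouts in the group use the early-stop sampling mode."""
    rng = random.Random(seed)
    n_early = int(group_size * early_stop_ratio)
    rollouts = [sample_rollout(i < n_early, rng) for i in range(group_size)]
    rewards = [r for r, _ in rollouts]
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    # Group-relative advantage: reward centered and scaled within the group,
    # so efficient (early-stop) rollouts that stay correct are reinforced.
    return [(r - mu) / sigma for r in rewards]

advantages = mixed_group_advantages()
print(advantages)
```

Under this reading, correct early-stop rollouts earn the same reward at lower token cost, so the policy gradient gradually shifts probability mass toward the shorter reasoning patterns that SAGE surfaces.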