🤖 AI Summary
This work addresses the challenge of inefficient deployment of large reasoning models in resource-constrained settings, where excessive or insufficient reasoning leads to computational redundancy or degraded accuracy. To this end, the authors propose ReBalance, a novel framework that uses a continuous confidence metric to identify the model's reasoning state in real time. Leveraging reasoning pattern prototypes derived from small-scale data, ReBalance generates guidance vectors that adaptively steer the model's inference trajectory. The method is training-free and plug-and-play, employing hidden state aggregation, confidence variance analysis, and a dynamic control function to modulate the strength and direction of the guidance vectors. Evaluated across four models (0.5B–32B) and nine benchmarks spanning mathematical reasoning, question answering, and code generation, ReBalance consistently reduces output redundancy while improving accuracy, demonstrating strong generalizability and practical utility.
📝 Abstract
Large Reasoning Models (LRMs) have shown remarkable reasoning capabilities, yet they often suffer from overthinking, expending redundant computational steps on simple problems, or underthinking, failing to explore sufficient reasoning paths despite having the capability to do so. These issues cause inefficiency and potential inaccuracy, limiting practical deployment in resource-constrained settings. Existing methods for mitigating overthinking, such as suppressing reflective keywords or adjusting reasoning length, may inadvertently induce underthinking and compromise accuracy. We therefore propose ReBalance, a training-free framework that achieves efficient reasoning with balanced thinking. ReBalance leverages confidence as a continuous indicator of reasoning dynamics, identifying overthinking through high confidence variance and underthinking through consistent overconfidence. By aggregating hidden states from a small-scale dataset into reasoning mode prototypes, we compute a steering vector that guides LRMs' reasoning trajectories. A dynamic control function modulates this vector's strength and direction based on real-time confidence, pruning redundancy during overthinking and promoting exploration during underthinking. Extensive experiments on four models ranging from 0.5B to 32B, across nine benchmarks in math reasoning, general question answering, and coding tasks, demonstrate that ReBalance effectively reduces output redundancy while improving accuracy, offering a general, training-free, and plug-and-play strategy for efficient and robust LRM deployment. Code is available at https://github.com/yu-lin-li/ReBalance.
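The mechanism described in the abstract — prototypes aggregated from hidden states, a steering vector between them, and a confidence-driven control function — can be illustrated with a minimal sketch. The paper's exact formulas are not reproduced here; all function names, thresholds, and the specific control rule below are hypothetical simplifications for illustration.

```python
import numpy as np

def reasoning_prototypes(over_states, under_states):
    """Aggregate hidden states from a small calibration set into
    per-mode prototypes (mean pooling is an assumed choice)."""
    p_over = np.mean(over_states, axis=0)
    p_under = np.mean(under_states, axis=0)
    return p_over, p_under

def steering_vector(p_over, p_under):
    """Unit vector pointing from the overthinking prototype toward
    the exploratory (underthinking-corrective) prototype."""
    v = p_under - p_over
    return v / np.linalg.norm(v)

def control_strength(confidences, var_hi=0.04, conf_hi=0.9, base=1.0):
    """Toy dynamic control: sign and strength of steering from
    real-time token confidences (thresholds are hypothetical)."""
    var = np.var(confidences)
    mean = np.mean(confidences)
    if var > var_hi:     # high confidence variance -> overthinking: prune
        return -base
    if mean > conf_hi:   # consistent overconfidence -> underthinking: explore
        return +base
    return 0.0           # balanced reasoning: leave trajectory untouched

def steer(hidden, v, alpha):
    """Shift the current hidden state along the steering vector."""
    return hidden + alpha * v
```

A usage sketch: with recent confidences `[0.1, 0.9, 0.2, 0.95]` the variance is high, so `control_strength` returns a negative alpha (prune redundancy); with `[0.95, 0.96, 0.94, 0.97]` it returns a positive alpha (promote exploration); moderate, stable confidences leave the state unmodified.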