🤖 AI Summary
This work addresses the challenge that large language models often follow inefficient or erroneous reasoning paths because they select reasoning strategies autonomously, leaving no fine-grained external control over the reasoning process. The authors use sparse autoencoders (SAEs) to disentangle hidden states into an interpretable feature space and propose SAE-Steering, a two-stage framework that first identifies the sparse features associated with specific reasoning strategies and then precisely modulates model behavior through vector-based interventions on those features. This fine-grained control over reasoning dynamics, which existing methods lack, improves steering effectiveness by more than 15% and enables reliable correction of erroneous reasoning trajectories, yielding a 7% absolute gain in task accuracy.
📝 Abstract
Large Reasoning Models (LRMs) exhibit human-like cognitive reasoning strategies (e.g., backtracking, cross-verification) during the reasoning process, which improves their performance on complex tasks. Currently, reasoning strategies are selected autonomously by LRMs themselves. However, such autonomous selection often produces inefficient or even erroneous reasoning paths. To make reasoning more reliable and flexible, it is important to develop methods for controlling reasoning strategies. Existing methods struggle to control fine-grained reasoning strategies due to conceptual entanglement in LRMs' hidden states. To address this, we leverage Sparse Autoencoders (SAEs) to decompose strategy-entangled hidden states into a disentangled feature space. To identify the few strategy-specific features among the vast pool of SAE features, we propose SAE-Steering, an efficient two-stage feature-identification pipeline. SAE-Steering first recalls features that amplify the logits of strategy-specific keywords, filtering out over 99% of features, and then ranks the remaining features by their control effectiveness. Using the identified strategy-specific features as control vectors, SAE-Steering outperforms existing methods by over 15% in control effectiveness. Furthermore, controlling reasoning strategies can redirect LRMs from erroneous paths onto correct ones, achieving a 7% absolute accuracy improvement.
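The two-stage pipeline in the abstract can be illustrated with a toy sketch. Everything below is a hypothetical stand-in, not the paper's implementation: the dimensions, random weight matrices, and keyword token ids are invented, and a real system would use a trained SAE's decoder and the LRM's actual unembedding matrix. The sketch shows the recall stage (rank features by how much their decoder direction amplifies the logits of strategy keywords) and the intervention (add a selected feature's decoder direction to the hidden state as a control vector).

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae, vocab = 16, 64, 32   # toy dimensions (assumptions)

# Random stand-ins for a trained SAE decoder and the model's unembedding.
W_dec = rng.normal(size=(d_sae, d_model))      # SAE feature -> residual direction
W_unembed = rng.normal(size=(d_model, vocab))  # residual -> token logits

keyword_ids = [3, 7]  # hypothetical token ids of strategy-specific keywords

# Stage 1 (recall): score each SAE feature by how much its decoder
# direction raises the logits of the strategy keywords, and keep the top few.
logits_per_feature = W_dec @ W_unembed                       # (d_sae, vocab)
keyword_score = logits_per_feature[:, keyword_ids].mean(axis=1)
candidates = np.argsort(keyword_score)[::-1][:4]             # top-4 features

# Stage 2 (steer): use a recalled feature's decoder direction as a
# control vector added to the hidden state, scaled by a coefficient alpha.
def steer(h, feature_id, alpha=4.0):
    return h + alpha * W_dec[feature_id]

h = rng.normal(size=d_model)            # a toy hidden state
h_steered = steer(h, candidates[0])

# Steering with the top feature should raise the keyword logits.
before = (h @ W_unembed)[keyword_ids].mean()
after = (h_steered @ W_unembed)[keyword_ids].mean()
print(after > before)
```

In the paper's setting, the candidates would additionally be ranked by measured control effectiveness on generated reasoning traces before being used as control vectors; here that second ranking step is omitted for brevity.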