Reliable Control-Point Selection for Steering Reasoning in Large Language Models

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the instability and unreliability of control vectors derived from keyword-based boundary detection in large language models, an approach that erroneously treats every detected boundary as a valid behavioral signal. The study formalizes intrinsic reasoning behavior as a context-dependent stochastic process and shows how unstable boundaries dilute the control signal. To mitigate this, the authors introduce a stability-aware filtering mechanism, coupled with a content-subspace projection denoising step, to extract reliable signals from hidden states and construct training-free reasoning control vectors. Evaluated on MATH-500, the approach achieves 78.4% accuracy, outperforming the strongest baseline by 5.0 points, and the resulting vectors transfer to the architecturally similar Nemotron-Research-Reasoning-1.5B and DeepScaleR-1.5B-Preview models, yielding improvements of 5.0 and 6.0 points, respectively.
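The content-subspace projection denoising mentioned above can be illustrated with a minimal sketch: estimate a low-rank "content" subspace from per-question activations and project the raw steering vector onto its orthogonal complement. The shapes, the rank `k`, and all variable names here are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of content-subspace projection denoising.
# Assumption: question-specific "content" directions dominate the
# variance of per-question mean activations, so the top-k right
# singular vectors approximate the content subspace to remove.
import numpy as np

rng = np.random.default_rng(0)
d, n_questions, k = 64, 32, 4            # hidden size, #questions, subspace rank (all illustrative)

H = rng.standard_normal((n_questions, d))   # stand-in for per-question mean hidden states
raw_vector = rng.standard_normal(d)         # stand-in for a raw steering vector

# Top-k right singular vectors of the centered activations span the
# dominant question-specific content directions.
_, _, Vt = np.linalg.svd(H - H.mean(axis=0), full_matrices=False)
C = Vt[:k]                                  # (k, d) orthonormal content basis

# Remove the component of the steering vector lying in the content subspace.
clean_vector = raw_vector - C.T @ (C @ raw_vector)

# The denoised vector is orthogonal to every estimated content direction.
residual = np.abs(C @ clean_vector).max()
print(residual)
```

Because `C` has orthonormal rows (SVD output), the projection guarantees `C @ clean_vector` is numerically zero, which is the sense in which question-specific noise is removed.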
📝 Abstract
Steering vectors offer a training-free mechanism for controlling reasoning behaviors in large language models, but constructing effective vectors requires identifying genuine behavioral signals in the model's hidden states. For behaviors that can be toggled via prompts, this is straightforward. However, many reasoning behaviors, such as self-reflection, emerge spontaneously and resist prompt-level control. Current methods detect these behaviors through keyword matching in chain-of-thought traces, implicitly assuming that every detected boundary encodes a genuine behavioral signal. We show that this assumption is overwhelmingly wrong: across 541 keyword-detected boundaries, 93.3% are behaviorally unstable, failing to reproduce the detected behavior under re-generation from the same prefix. We develop a probabilistic model that formalizes intrinsic reasoning behaviors as stochastic events with context-dependent trigger probabilities, and show that unstable boundaries dilute the steering signal. Guided by this analysis, we propose stability filtering, which retains only boundaries where the model consistently reproduces the target behavior. Combined with a content-subspace projection that removes residual question-specific noise, our method achieves 0.784 accuracy on MATH-500 (+5.0 over the strongest baseline). The resulting steering vectors transfer across models in the same architecture family without re-extraction, improving Nemotron-Research-Reasoning-1.5B (+5.0) and DeepScaleR-1.5B-Preview (+6.0). Code is available at https://github.com/zhmzm/stability-steering.
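The stability-filtering idea from the abstract, re-generating from the same prefix and keeping only boundaries where the behavior consistently reappears, can be sketched as follows. The trigger probabilities, sample count, and threshold are illustrative assumptions; a real implementation would replace the toy sampler with actual LLM re-generation from each boundary's prefix.

```python
# Hedged sketch of stability filtering: treat each keyword-detected
# boundary as a stochastic event with a context-dependent trigger
# probability (per the paper's probabilistic framing), re-sample it,
# and retain only boundaries that fire consistently.
import random

def sample_behavior(p_trigger, rng):
    """Toy stand-in for re-generating from a boundary's prefix: the
    target behavior (e.g. a self-reflection marker) fires with
    probability p_trigger."""
    return rng.random() < p_trigger

def is_stable(p_trigger, n_samples=20, threshold=0.75, seed=0):
    """Keep a boundary only if the behavior reproduces in at least
    `threshold` of `n_samples` re-generations (threshold is an
    illustrative choice, not the paper's)."""
    rng = random.Random(seed)
    hits = sum(sample_behavior(p_trigger, rng) for _ in range(n_samples))
    return hits / n_samples >= threshold

# Hypothetical boundaries with assumed trigger probabilities.
boundaries = {"reflective prefix": 1.0, "flaky prefix": 0.0}
stable = [name for name, p in boundaries.items() if is_stable(p)]
```

Only the consistently reproducing boundary survives; the unstable one, which the paper shows makes up the vast majority of keyword-detected boundaries, is filtered out before vector extraction.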
Problem

Research questions and friction points this paper is trying to address.

steering vectors
reasoning behaviors
behavioral stability
control-point selection
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

steering vectors
stability filtering
reasoning control
behavioral consistency
content-subspace projection