🤖 AI Summary
This work investigates the causes of unreliable behavior modulation via steering vectors in language models and proposes methods to enhance their robustness. We address the observed failure, and even reversal, of steering effects under certain prompts by analyzing the geometric structure of activation spaces. Specifically, we identify a strong correlation between directional disparities across prompt types and steering failure. We further introduce two quantitative predictors of steering efficacy: activation separability (the linear separability of positive- and negative-direction activations) and cosine similarity between steering vectors and task-relevant directions. Through multi-template prompting experiments, differential activation analysis, and geometric visualizations, we empirically validate that high cosine similarity and strong activation separability jointly improve steering reliability. Our contributions include: (i) a geometric explanation for steering vector instability; (ii) the first quantitative framework linking activation-space geometry to steering effectiveness; and (iii) an interpretable, predictive analytical foundation for controllable text generation.
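The two predictors named above can be illustrated on toy data. The sketch below is a minimal, hypothetical implementation (not the paper's exact procedure): it builds a difference-of-means steering vector from synthetic "positive" and "negative" activations, then computes (a) its cosine similarity with an assumed task-relevant direction and (b) a simple separability proxy, the accuracy of a difference-of-means linear classifier on the pooled activations.

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two direction vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def separability(pos_acts, neg_acts):
    # Linear-separability proxy: accuracy of a difference-of-means
    # classifier (hyperplane midway between the two class means).
    mu_pos, mu_neg = pos_acts.mean(axis=0), neg_acts.mean(axis=0)
    w = mu_pos - mu_neg
    b = -(w @ (mu_pos + mu_neg)) / 2
    correct = (pos_acts @ w + b > 0).sum() + (neg_acts @ w + b <= 0).sum()
    return correct / (len(pos_acts) + len(neg_acts))

# Synthetic activations: two clusters offset along a known task direction.
rng = np.random.default_rng(0)
d = 64
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
pos = rng.normal(size=(100, d)) + 2.0 * direction
neg = rng.normal(size=(100, d)) - 2.0 * direction

# Difference-of-means steering vector, as commonly used in steering work.
steer = pos.mean(axis=0) - neg.mean(axis=0)

print(cosine_similarity(steer, direction))  # high for this well-aligned toy data
print(separability(pos, neg))               # high when the clusters separate cleanly
```

In this idealized setting both predictors are near 1; the paper's claim is that real datasets with lower values on either measure steer less reliably.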
📝 Abstract
Steering vectors are a lightweight method to control language model behavior by adding a learned bias to the activations at inference time. Although steering demonstrates promising performance, recent work shows that it can be unreliable or even counterproductive in some cases. This paper studies the influence of prompt types and the geometry of activation differences on steering reliability. First, we find that all seven prompt types used in our experiments produce a net positive steering effect, but exhibit high variance across samples and often produce an effect opposite to the desired one. No prompt type clearly outperforms the others, yet the steering vectors resulting from the different prompt types often differ directionally (as measured by cosine similarity). Second, we show that higher cosine similarity between training-set activation differences predicts more effective steering. Finally, we observe that datasets where positive and negative activations are better separated are more steerable. Our results suggest that vector steering is unreliable when the target behavior is not represented by a coherent direction.
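The intervention described in the first sentence, adding a learned bias to activations at inference time, can be sketched in a few lines. This is a toy illustration on NumPy arrays, not the paper's implementation; in practice the offset would be added to a chosen transformer layer's residual-stream activations (e.g. via a forward hook), and `alpha` is an assumed strength parameter whose sign flips the direction of steering.

```python
import numpy as np

def apply_steering(hidden_states, steering_vector, alpha=1.0):
    # Add the same bias direction to every token position's activation.
    # alpha scales steering strength; negative alpha steers the other way.
    return hidden_states + alpha * steering_vector

rng = np.random.default_rng(1)
seq_len, d_model = 5, 16
hidden = rng.normal(size=(seq_len, d_model))  # stand-in for one layer's activations
vec = rng.normal(size=d_model)                # stand-in for a learned steering vector

steered = apply_steering(hidden, vec, alpha=2.0)
# Every position is shifted by exactly alpha * vec.
print(np.allclose(steered - hidden, 2.0 * vec))
```

Because the same offset is applied at every position regardless of the prompt, the intervention only works when the target behavior corresponds to a single coherent direction, which is the condition the abstract identifies as failing in unreliable cases.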