🤖 AI Summary
This work identifies and names the phenomenon of "steering externalities": benign activation steering, while enhancing the practical utility of large language models, unexpectedly undermines their safety alignment and substantially increases jailbreak risk. The study demonstrates that steering vectors derived exclusively from compliant, benign data (e.g., JSON-formatted outputs) systematically erode a model's safety boundaries during inference. By injecting these vectors into the model's internal representations and evaluating against black-box jailbreak benchmarks, the authors show that the intervention can raise jailbreak success rates above 80%. These findings expose a critical blind spot in current alignment mechanisms and offer a new lens for security auditing of deployed models.
📝 Abstract
Activation steering is a practical post-training model alignment technique to enhance the utility of Large Language Models (LLMs). Prior to deploying a model as a service, developers can steer a pre-trained model toward specific behavioral objectives, such as compliance or instruction adherence, without the need for retraining. This process is as simple as adding a steering vector to the model's internal representations. However, this capability unintentionally introduces critical and under-explored safety risks. We identify a phenomenon termed Steering Externalities, where steering vectors derived from entirely benign datasets, such as those enforcing strict compliance or specific output formats like JSON, inadvertently erode safety guardrails. Experiments reveal that these interventions act as a force multiplier, creating new vulnerabilities to jailbreaks and increasing attack success rates to over 80% on standard benchmarks by bypassing the initial safety alignment. Ultimately, our results expose a critical blind spot in deployment: benign activation steering systematically erodes the "safety margin," rendering models more vulnerable to black-box attacks and proving that inference-time utility improvements must be rigorously audited for unintended safety externalities.