🤖 AI Summary
This study systematically investigates the robustness of contrastive steering for large language models (LLMs) when the training data is corrupted by noise or malicious attacks. We identify mean estimation in high-dimensional activation spaces as a critical vulnerability: while moderate contamination has limited impact, strong adversarial corruption can induce harmful steering biases. To address this, we propose a robust statistical approach to mean estimation that effectively mitigates the adverse effects of data contamination. Our work is the first to reveal the mechanism by which contrastive steering is sensitive to data contamination, and it demonstrates—through geometric analysis and adversarial modeling—that the proposed method substantially improves robustness against such corruptions.
📝 Abstract
Contrastive steering has been shown to be a simple and effective method for adjusting the generative behavior of LLMs at inference time. It uses examples of prompt responses with and without a trait to identify a direction in an intermediate activation layer, and then shifts activations along this one-dimensional subspace. However, despite its growing use in AI safety applications, the robustness of contrastive steering to noisy or adversarial data corruption is poorly understood. We initiate a study of the robustness of this process with respect to corruption of the dataset of examples used to train the steering direction. Our first observation is that contrastive steering is quite robust to a moderate amount of corruption, but unwanted side effects can be clearly and maliciously induced when a non-trivial fraction of the training data is altered. Second, we analyze the geometry of various types of corruption and identify some safeguards. Notably, a key step in learning the steering direction involves high-dimensional mean computation, and we show that replacing this step with a recently developed robust mean estimator often mitigates most of the unwanted effects of malicious corruption.
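To make the mechanism concrete, here is a minimal sketch of the two ingredients described above: a steering direction computed as the difference of activation means over contrastive examples, and a robust variant of that mean computation. The function names and the specific robust estimator (a coordinate-wise winsorized mean) are illustrative stand-ins, not the paper's estimator.

```python
import numpy as np

def steering_vector(acts_pos, acts_neg):
    """Naive contrastive steering direction: difference of per-class
    activation means. acts_* have shape (n_examples, hidden_dim)."""
    return acts_pos.mean(axis=0) - acts_neg.mean(axis=0)

def winsorized_mean(acts, trim_frac=0.1):
    """Simple robust mean stand-in: clip each coordinate to its
    [trim_frac, 1 - trim_frac] quantile range before averaging,
    so a small fraction of extreme (possibly adversarial) points
    cannot drag the estimate arbitrarily far."""
    lo = np.quantile(acts, trim_frac, axis=0)
    hi = np.quantile(acts, 1 - trim_frac, axis=0)
    return np.clip(acts, lo, hi).mean(axis=0)

def robust_steering_vector(acts_pos, acts_neg, trim_frac=0.1):
    """Same difference-of-means direction, with the vulnerable mean
    step swapped for the robust estimator."""
    return winsorized_mean(acts_pos, trim_frac) - winsorized_mean(acts_neg, trim_frac)
```

With clean data the two directions coincide up to sampling noise; the difference appears when, say, 10% of the positive examples are replaced by adversarial outliers, which shift the naive mean far from the true direction while leaving the winsorized estimate largely intact.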