Understanding and Mitigating Dataset Corruption in LLM Steering

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically investigates the robustness of contrastive large language model (LLM) steering under training data corrupted by noise or malicious attacks. We identify mean estimation in high-dimensional activation spaces as a critical vulnerability: while moderate contamination has limited impact, strong adversarial corruption can induce harmful steering biases. To address this, we propose a robust statistical approach to mean estimation that effectively mitigates the adverse effects of data contamination. Our work is the first to reveal the sensitivity mechanism of contrastive steering to data pollution and demonstrates—through geometric analysis and adversarial modeling—that the proposed method substantially improves robustness against such corruptions.

📝 Abstract
Contrastive steering has been shown to be a simple and effective method for adjusting the generative behavior of LLMs at inference time. It uses examples of prompt responses with and without a trait to identify a direction in an intermediate activation layer, and then shifts activations along this one-dimensional subspace. However, despite its growing use in AI safety applications, the robustness of contrastive steering to noisy or adversarial data corruption is poorly understood. We initiate a study of the robustness of this process with respect to corruption of the dataset of examples used to train the steering direction. Our first observation is that contrastive steering is quite robust to a moderate amount of corruption, but unwanted side effects can be clearly and maliciously manifested when a non-trivial fraction of the training data is altered. Second, we analyze the geometry of various types of corruption and identify some safeguards. Notably, a key step in learning the steering direction involves a high-dimensional mean computation, and we show that replacing this step with a recently developed robust mean estimator often mitigates most of the unwanted effects of malicious corruption.
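The pipeline the abstract describes, computing a steering direction as a difference of per-class activation means and optionally hardening the mean step against contamination, can be sketched as below. This is a minimal illustration: the coordinate-wise trimmed mean is a simple stand-in for a robust estimator, not the specific one the paper proposes, and all function names here are hypothetical.

```python
import numpy as np

def steering_vector(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Standard contrastive steering direction: difference of class means.

    pos_acts, neg_acts: (n_examples, d) activations from an intermediate
    layer for responses with / without the target trait.
    """
    return pos_acts.mean(axis=0) - neg_acts.mean(axis=0)

def trimmed_mean(acts: np.ndarray, trim: float = 0.1) -> np.ndarray:
    """Coordinate-wise trimmed mean: per coordinate, drop the lowest and
    highest `trim` fraction of values before averaging. A simple
    illustrative robust estimator; the paper uses a more sophisticated
    high-dimensional robust mean."""
    n = acts.shape[0]
    k = int(n * trim)
    srt = np.sort(acts, axis=0)          # sort each coordinate independently
    return srt[k:n - k].mean(axis=0)

def robust_steering_vector(pos_acts, neg_acts, trim: float = 0.1):
    """Same contrastive direction, but with the vulnerable mean step
    replaced by a robust estimate on each class."""
    return trimmed_mean(pos_acts, trim) - trimmed_mean(neg_acts, trim)

def steer(activation: np.ndarray, direction: np.ndarray, alpha: float = 1.0):
    """Shift an activation along the (normalized) steering direction."""
    unit = direction / np.linalg.norm(direction)
    return activation + alpha * unit
```

A quick way to see the vulnerability: injecting a handful of large adversarial outliers into the positive set drags the plain difference-of-means direction far from the one learned on clean data, while the trimmed-mean variant discards the outliers and stays close to it.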
Problem

Research questions and friction points this paper is trying to address.

dataset corruption
contrastive steering
robustness
LLM steering
data poisoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

contrastive steering
dataset corruption
robust mean estimation
LLM alignment
activation steering
Cullen Anderson
Department of Computer Science, University of Massachusetts Amherst
Narmeen Oozeer
Research Engineer, Martian Learning
mathematics, deep learning, interpretability
Foad Namjoo
Kahlert School of Computing, University of Utah
Remy Ogasawara
Kahlert School of Computing, University of Utah
Amirali Abdullah
Thoughtworks
Neural Reasoning, Mech Interp, Deep Learning, CS Theory
Jeff M. Phillips
Kahlert School of Computing, University of Utah