Activation Steering for Aligned Open-ended Generation without Sacrificing Coherence

📅 2026-04-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the vulnerability of large language models to alignment failures during generation, which can arise from prompting or fine-tuning and are inadequately mitigated by existing mechanisms that typically operate only on initial outputs. To ensure consistent alignment throughout generation, the authors propose two projection-aware activation steering methods—StTP and StMP—that apply lightweight runtime interventions in the activation space. These methods dynamically detect and correct misaligned activations using decision boundaries derived from logistic regression. Evaluated on Llama-3.3-70B-Instruct and Qwen3-32B, the approach significantly restores desirable attributes such as honesty and empathy while preserving textual coherence. It outperforms fixed-coefficient baselines on standard benchmarks including MMLU, MT-Bench, and AlpacaEval, and effectively alleviates repetition issues in multi-turn dialogues.
📝 Abstract
Alignment in LLMs is more brittle than commonly assumed: misalignment can be triggered by adversarial prompts, benign fine-tuning, emergent misalignment, and goal misgeneralization. Recent evidence suggests that some misalignment behaviors are encoded as linear structure in activation space, making them tractable to correct via steering, while safety alignment has been shown to primarily govern the first few output tokens, leaving subsequent generation unguarded. These findings motivate activation steering as a lightweight runtime defense that continuously corrects misaligned activations throughout generation. We evaluate three methods: Steer-With-Fixed-Coeff (SwFC), which applies uniform additive steering, and two novel projection-aware methods, Steer-to-Target-Projection (StTP) and Steer-to-Mirror-Projection (StMP), which use a logistic regression decision boundary to selectively intervene only on tokens whose activations fall below distributional thresholds. Using malicious system prompts as a controlled proxy for misalignment, we evaluate under two threat models (dishonesty and dismissiveness) and two architectures (Llama-3.3-70B-Instruct, Qwen3-32B). All methods substantially recover target traits (honesty and compassion) while preserving coherence. StTP and StMP better maintain general capabilities (MMLU, MT-Bench, AlpacaEval) and produce less repetition in multi-turn conversations.
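The contrast the abstract draws between uniform additive steering (SwFC) and projection-aware steering (StTP/StMP) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the steering direction `w`, coefficient `alpha`, and threshold `tau` are all assumed names, and `w` here stands in for the normal of the logistic-regression decision boundary the paper learns over activations.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy activation dimension

# Assumed steering direction: the unit normal of a boundary separating
# "aligned" from "misaligned" activations (hypothetical stand-in for the
# paper's logistic-regression boundary).
w = rng.normal(size=d)
w /= np.linalg.norm(w)

def steer_fixed(h, alpha=4.0):
    """SwFC-style: always add a uniform multiple of the direction,
    regardless of where the activation already sits."""
    return h + alpha * w

def steer_to_target_projection(h, tau=1.0):
    """StTP-style sketch: intervene only when the activation's projection
    onto w falls below the threshold tau, and shift it just enough to
    reach that target projection."""
    proj = h @ w
    if proj >= tau:
        return h  # already on the aligned side of the boundary: no edit
    return h + (tau - proj) * w

h = rng.normal(size=d)
print("projection before:", h @ w)
print("projection after StTP:", steer_to_target_projection(h) @ w)
```

The selective intervention is what the abstract credits for preserving coherence and general capabilities: activations already past the threshold are left untouched, whereas fixed-coefficient steering perturbs every token by the same amount.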
Problem

Research questions and friction points this paper is trying to address.

alignment
activation steering
misalignment
large language models
coherence
Innovation

Methods, ideas, or system contributions that make the work stand out.

activation steering
alignment
projection-aware intervention
runtime defense
misalignment correction
Niklas Herbster
Tara Research
Martin Zborowski
Tara Research
Alberto Tosato
Tara Research
Gauthier Gidel
Associate professor at University of Montréal (DIRO), Core Member of Mila, Canada CIFAR AI Chair
Artificial Intelligence · Machine Learning · Optimization · Game Theory · Neural Networks
Tommaso Tosato
Tara Research