🤖 AI Summary
This work addresses the vulnerability of large language models to alignment failures during generation, which can arise from prompting or fine-tuning and are inadequately mitigated by existing safety mechanisms, which primarily constrain only the first few output tokens. To keep alignment consistent throughout generation, the authors propose two projection-aware activation steering methods, StTP and StMP, which apply lightweight runtime interventions in activation space. These methods dynamically detect and correct misaligned activations using decision boundaries derived from logistic regression. Evaluated on Llama-3.3-70B-Instruct and Qwen3-32B, the approach substantially restores target traits such as honesty and compassion while preserving textual coherence. It outperforms a fixed-coefficient steering baseline on standard benchmarks including MMLU, MT-Bench, and AlpacaEval, and produces less repetition in multi-turn dialogues.
📝 Abstract
Alignment in LLMs is more brittle than commonly assumed: misalignment can be triggered by adversarial prompts, benign fine-tuning, emergent misalignment, and goal misgeneralization. Recent evidence suggests that some misalignment behaviors are encoded as linear structure in activation space, making them tractable to correct via steering, while safety alignment has been shown to primarily govern the first few output tokens, leaving subsequent generation unguarded. These findings motivate activation steering as a lightweight runtime defense that continuously corrects misaligned activations throughout generation. We evaluate three methods: Steer-With-Fixed-Coeff (SwFC), which applies uniform additive steering, and two novel projection-aware methods, Steer-to-Target-Projection (StTP) and Steer-to-Mirror-Projection (StMP), that use a logistic regression decision boundary to selectively intervene only on tokens whose activations fall below distributional thresholds. Using malicious system prompts as a controlled proxy for misalignment, we evaluate under two threat models (dishonesty and dismissiveness) and two architectures (Llama-3.3-70B-Instruct, Qwen3-32B). All methods substantially recover target traits (honesty and compassion) while preserving coherence. StTP and StMP better maintain general capabilities (MMLU, MT-Bench, AlpacaEval) and produce less repetition in multi-turn conversations.
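To make the projection-aware idea concrete, the core mechanism described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the direction `w`, bias `b`, threshold `tau`, and target projection `p_target` are hypothetical stand-ins for a trained logistic-regression boundary and its derived distributional threshold, and the exact StTP/StMP update rules in the paper may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def steer_to_target_projection(h, w, b, tau=0.5, p_target=1.0):
    """Selective steering sketch (StTP-style).

    h        : activation vector for the current token
    w, b     : logistic-regression weights/bias scoring alignment
    tau      : probability threshold below which we intervene
    p_target : desired projection of h onto the steering direction
    """
    score = sigmoid(h @ w + b)          # alignment probability for this token
    if score >= tau:
        return h                        # aligned enough: leave untouched
    w_unit = w / np.linalg.norm(w)      # unit steering direction
    proj = h @ w_unit                   # current projection onto the direction
    # Shift h along the direction so its projection reaches p_target
    return h + (p_target - proj) * w_unit

# A misaligned activation (low classifier score) gets moved to the target
# projection; an aligned one passes through unchanged.
w = np.array([1.0, 0.0])
steered = steer_to_target_projection(np.array([-3.0, 2.0]), w, 0.0)
kept = steer_to_target_projection(np.array([2.0, 1.0]), w, 0.0)
```

The key design point this illustrates is selectivity: because intervention is gated by the classifier score, tokens that already lie on the aligned side of the decision boundary are not perturbed, which is what lets projection-aware methods preserve general capability better than uniform fixed-coefficient steering.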