🤖 AI Summary
This work addresses the challenge of coarse-grained safety alignment in current language models, which often leads to over-rejection of benign requests or under-rejection of harmful content. Focusing on Llama-3-8B, the study introduces category-specific refusal tokens and reveals, for the first time, that these tokens induce decoupled, category-aligned directions in the residual stream. Building on this insight, the authors propose a lightweight, training-free intervention: by constructing a unified intervention vector via a learned low-rank combination in a whitened, orthonormal basis, and gating it with a lightweight probe that decides whether to steer toward or away from refusal, they enable precise multi-category control over refusal behavior at inference time. This approach significantly reduces over-rejection on benign prompts while increasing rejection of harmful queries, and transfers across model variants sharing the same architecture.
📝 Abstract
Language models are commonly fine-tuned for safety alignment to refuse harmful prompts. One approach fine-tunes them to generate categorical refusal tokens that distinguish different refusal types before responding. In this work, we leverage a version of Llama 3 8B fine-tuned with these categorical refusal tokens to enable inference-time control over fine-grained refusal behavior, improving both safety and reliability. We show that refusal token fine-tuning induces separable, category-aligned directions in the residual stream, which we extract and use to construct categorical steering vectors with a lightweight probe that determines whether to steer toward or away from refusal during inference. In addition, we introduce a learned low-rank combination that mixes these category directions in a whitened, orthonormal steering basis, resulting in a single controllable intervention under activation-space anisotropy, and show that this intervention is transferable across same-architecture model variants without additional training. Across benchmarks, both categorical steering vectors and the low-rank combination consistently reduce over-refusals on benign prompts while increasing refusal rates on harmful prompts, highlighting their utility for multi-category refusal control.
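The pipeline sketched in the abstract — extract category-aligned directions from the residual stream, whiten and orthonormalize them into a steering basis, and let a lightweight probe decide the steering sign at inference — can be illustrated roughly as follows. This is a minimal sketch under common activation-steering assumptions (difference-of-means direction extraction, a linear probe); all function names and parameters here are illustrative, not the paper's actual implementation:

```python
import numpy as np

def category_direction(refusal_acts, benign_acts):
    # Difference-of-means direction for one refusal category
    # (a standard extraction choice; the paper's exact method may differ).
    d = refusal_acts.mean(axis=0) - benign_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def whitened_orthonormal_basis(directions, cov):
    # Map category directions into a whitened space (to account for
    # activation-space anisotropy), then orthonormalize them via QR.
    w = np.linalg.cholesky(np.linalg.inv(cov))   # whitening transform
    q, _ = np.linalg.qr((directions @ w.T).T)    # orthonormalize columns
    return q.T                                   # rows: orthonormal basis

def steer(activation, direction, probe_w, probe_b, alpha=4.0):
    # A lightweight linear probe decides whether to steer toward (+1)
    # or away from (-1) refusal, then shifts the residual-stream state.
    sign = 1.0 if activation @ probe_w + probe_b > 0 else -1.0
    return activation + sign * alpha * direction
```

In this sketch, `alpha` is a hypothetical steering strength; the learned low-rank combination described in the abstract would correspond to mixing the rows of the orthonormal basis with learned coefficients to form a single intervention vector.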