🤖 AI Summary
To address the challenge of localizing refusal behavior in large language models (LLMs), this paper proposes COSMIC, an activation-space framework that identifies refusal directions without relying on output text, predefined refusal templates, or human-specified hypotheses about refusal behavior. Methodologically, it applies cosine similarity over model activations to automatically select viable steering directions and target layers, fully decoupling direction selection from token-level outputs. Key contributions include: (i) output-agnostic automatic discovery of refusal directions; (ii) steering performance comparable to prior methods without assumptions such as the presence of specific refusal tokens; (iii) reliable identification of refusal directions in adversarial settings and weakly aligned models; and (iv) the ability to steer such models toward safer behavior with minimal increase in false refusals, demonstrating robustness across a wide range of alignment conditions.
📝 Abstract
Large Language Models (LLMs) encode behaviors such as refusal within their activation space, yet identifying these behaviors remains a significant challenge. Existing methods often rely on predefined refusal templates detectable in output tokens or require manual analysis. We introduce **COSMIC** (Cosine Similarity Metrics for Inversion of Concepts), an automated framework for direction selection that identifies viable steering directions and target layers using cosine similarity, entirely independent of model outputs. COSMIC achieves steering performance comparable to prior methods without requiring assumptions about a model's refusal behavior, such as the presence of specific refusal tokens. It reliably identifies refusal directions in adversarial settings and weakly aligned models, and is capable of steering such models toward safer behavior with minimal increase in false refusals, demonstrating robustness across a wide range of alignment conditions.
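To make the output-agnostic idea concrete, here is a minimal sketch of scoring a candidate refusal direction purely from activations, with no reference to generated tokens. This is an illustration under assumptions, not COSMIC itself: the difference-in-means candidate, the synthetic activations, and the cosine-gap score are all hypothetical simplifications standing in for the paper's actual selection criteria.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def difference_in_means(harmful, harmless):
    """Candidate steering direction: mean activation on harmful prompts minus
    mean activation on harmless prompts (one common candidate source;
    a hypothetical stand-in for COSMIC's candidate directions)."""
    return harmful.mean(axis=0) - harmless.mean(axis=0)

def score_direction(direction, harmful, harmless):
    """Output-agnostic score: average cosine-similarity gap between the two
    activation sets along the candidate direction (illustrative criterion)."""
    sim_harmful = np.mean([cosine(a, direction) for a in harmful])
    sim_harmless = np.mean([cosine(a, direction) for a in harmless])
    return sim_harmful - sim_harmless

# Synthetic stand-ins for one layer's residual-stream activations (d = 64):
# harmful prompts are shifted along a hidden "refusal" axis.
d = 64
refusal_axis = np.zeros(d)
refusal_axis[0] = 1.0
harmless = rng.normal(size=(200, d))
harmful = rng.normal(size=(200, d)) + 3.0 * refusal_axis

direction = difference_in_means(harmful, harmless)
print(round(score_direction(direction, harmful, harmless), 3))  # clearly > 0
```

In a real setting, the same scoring loop would run over directions and layers extracted from a model's hidden states, and the highest-scoring (direction, layer) pair would be selected for steering; no output text is inspected at any point.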