🤖 AI Summary
Existing activation steering methods lack input awareness, which prevents fine-grained, selective control over model responses. This paper proposes Conditional Activation Steering (CAST), the first approach to condition activation interventions on the semantic category of the input: by analyzing hidden-state activation patterns during LLM inference, CAST dynamically triggers refusal responses for specific risk categories (e.g., hate speech, adult content) without fine-tuning or modifying model weights. CAST combines hidden-state pattern recognition, conditional triggering, and targeted offsets in activation space, enabling rule-driven, zero-shot behavioral programming. Evaluated across multiple safety-critical and domain-specific refusal tasks, CAST achieves >92% recall while preserving response quality for non-target inputs (BLEU degradation <0.5), thus balancing safety and general-purpose utility.
📝 Abstract
LLMs have shown remarkable capabilities, but precisely controlling their response behavior remains challenging. Existing activation steering methods alter LLM behavior indiscriminately, limiting their practical applicability in settings where selective responses are essential, such as content moderation or domain-specific assistants. In this paper, we propose Conditional Activation Steering (CAST), which analyzes LLM activation patterns during inference to selectively apply or withhold activation steering based on the input context. Our method is based on the observation that different categories of prompts activate distinct patterns in the model's hidden states. Using CAST, one can systematically control LLM behavior with rules like "if input is about hate speech or adult content, then refuse" or "if input is not about legal advice, then refuse." This allows for selective modification of responses to specific content while maintaining normal responses to other content, all without requiring weight optimization. We release an open-source implementation of our framework at.
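The conditional mechanism the abstract describes can be illustrated with a minimal sketch: check whether a hidden state aligns with a "condition" direction for the target category, and only then add a "behavior" (e.g., refusal) steering vector. The function name, vectors, threshold, and steering strength below are illustrative assumptions, not the paper's actual implementation or values.

```python
import numpy as np

def conditional_steer(hidden, condition_vec, behavior_vec,
                      threshold=0.7, strength=4.0):
    """Apply a steering offset only when the hidden state matches the condition.

    hidden:        activation vector at some chosen layer (assumed shape (d,))
    condition_vec: direction associated with the target prompt category
    behavior_vec:  direction whose addition induces the desired behavior (refusal)
    threshold, strength: hypothetical gating and scaling hyperparameters
    """
    # cosine similarity between the hidden state and the condition direction
    sim = hidden @ condition_vec / (
        np.linalg.norm(hidden) * np.linalg.norm(condition_vec)
    )
    if sim > threshold:
        # condition fires: shift the activation along the behavior direction
        return hidden + strength * behavior_vec
    # condition does not fire: leave the activation untouched
    return hidden

# Toy demo with made-up 8-dimensional activations
d = 8
cond = np.ones(d)                      # hypothetical "hate speech" direction
refuse = np.eye(d)[0]                  # hypothetical refusal direction
h_match = np.ones(d)                   # perfectly aligned with the condition
h_other = np.eye(d)[1]                 # nearly orthogonal to the condition

out_match = conditional_steer(h_match, cond, refuse)   # steered
out_other = conditional_steer(h_other, cond, refuse)   # unchanged
```

In the full method this check would run inside the model's forward pass (e.g., via a layer hook), so matching prompts are steered toward refusal while all other inputs pass through unmodified.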