🤖 AI Summary
Large language models (LLMs) frequently generate toxic content (e.g., abusive or vulgar language), and existing detoxification methods are vulnerable to jailbreak attacks. Method: This paper proposes a causality-aware toxicity intervention framework based on sparse autoencoders (SAEs). It is the first to employ SAEs to identify feature directions in the residual stream strongly associated with toxicity, and it introduces three tiers of activation steering for targeted intervention. The work further shows that feature splitting in wider SAEs harms safety interventions, underscoring the need for disentangled feature learning. Results: Experiments on GPT-2 Small and Gemma-2-2B demonstrate up to 20% toxicity reduction on the RealToxicityPrompts benchmark while leaving standard NLP benchmark scores stable, though fluency can degrade at the most aggressive steering levels (notably on GPT-2 Small), indicating that safety gains and knowledge retention can largely be achieved together.
📝 Abstract
Large language models (LLMs) are now ubiquitous in user-facing applications, yet they still generate undesirable toxic outputs, including profanity, vulgarity, and derogatory remarks. Although numerous detoxification methods exist, most apply broad, surface-level fixes and can therefore be easily circumvented by jailbreak attacks. In this paper we leverage sparse autoencoders (SAEs) to identify toxicity-related directions in the residual stream of models and perform targeted activation steering using the corresponding decoder vectors. We introduce three tiers of steering aggressiveness and evaluate them on GPT-2 Small and Gemma-2-2B, revealing trade-offs between toxicity reduction and language fluency. At stronger steering strengths, these causal interventions surpass competitive baselines in reducing toxicity by up to 20%, though fluency can degrade noticeably on GPT-2 Small depending on the aggressiveness. Crucially, standard NLP benchmark scores remain stable under steering, indicating that the model's knowledge and general abilities are preserved. We further show that feature splitting in wider SAEs hampers safety interventions, underscoring the importance of disentangled feature learning. Our findings highlight both the promise and the current limitations of SAE-based causal interventions for LLM detoxification, and suggest practical guidelines for safer language-model deployment.
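To make the mechanism concrete, the sketch below illustrates one common way such SAE-guided steering can be implemented: subtracting the component of the residual stream that lies along toxicity-associated decoder directions via a forward hook, with a scalar `alpha` standing in for the paper's three tiers of aggressiveness. This is a minimal illustration under our own assumptions, not the authors' released code; the hook placement, the `toxic_decoder_dirs` tensor, and the layer index in the usage comment are all hypothetical.

```python
# Minimal sketch of SAE decoder-vector activation steering (illustrative, not the paper's code).
# Assumption: `toxic_decoder_dirs` holds SAE decoder vectors (n_features, d_model) for
# features identified as toxicity-related; `alpha` sets how aggressive the intervention is.
import torch


def make_steering_hook(toxic_decoder_dirs: torch.Tensor, alpha: float):
    """Return a forward hook that removes toxicity-aligned components
    from the residual stream at the hooked layer."""
    # Normalize each decoder direction so alpha has a consistent meaning across features.
    dirs = toxic_decoder_dirs / toxic_decoder_dirs.norm(dim=-1, keepdim=True)

    def hook(module, inputs, output):
        # Transformer blocks often return tuples; the first element is assumed
        # to be the residual stream of shape (batch, seq, d_model).
        resid = output[0] if isinstance(output, tuple) else output
        for d in dirs:
            # Projection of each position's activation onto direction d.
            coeff = (resid @ d).unsqueeze(-1)          # (batch, seq, 1)
            # Subtract the projected component, scaled by alpha
            # (alpha = 1 ablates it fully; alpha > 1 overshoots, i.e. steers harder).
            resid = resid - alpha * coeff * d
        return (resid,) + output[1:] if isinstance(output, tuple) else resid

    return hook


# Hypothetical usage with a HuggingFace GPT-2 model (layer index chosen for illustration):
# handle = model.transformer.h[8].register_forward_hook(
#     make_steering_hook(toxic_decoder_dirs, alpha=2.0))
# ... run generation as usual ...
# handle.remove()
```

In this framing, the three tiers of steering aggressiveness described in the abstract would correspond to progressively larger (or conditionally applied) values of `alpha`, trading stronger toxicity suppression for the fluency cost the authors report on GPT-2 Small.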