Breaking Bad Tokens: Detoxification of LLMs Using Sparse Autoencoders

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently generate toxic content (e.g., abusive or vulgar language), and existing detoxification methods are often shallow enough to be circumvented by jailbreak attacks. Method: The paper proposes a causal toxicity-intervention framework based on sparse autoencoders (SAEs). It uses SAEs to identify feature directions in the residual stream that are strongly correlated with toxicity, then intervenes on them via targeted activation steering at three tiers of aggressiveness. The work further shows that feature splitting in wider SAEs hampers safety interventions, underscoring the need for disentangled feature learning. Results: Experiments on GPT-2 Small and Gemma-2-2B demonstrate up to 20% toxicity reduction on the RealToxicityPrompts benchmark while standard NLP benchmark scores remain stable, though fluency can degrade at the most aggressive steering settings, especially on GPT-2 Small.
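As a rough illustration of the identification step, the sketch below scores SAE latents by how much more they activate on toxic than on clean text. The names and tensor shapes (`sae_encode`, `acts_toxic`, `acts_clean`) are assumptions for exposition, not the paper's actual code.

```python
import torch

def rank_toxic_features(sae_encode, acts_toxic, acts_clean):
    """Score each SAE latent by how much more it fires on toxic text.

    sae_encode: maps residual-stream activations [n, d_model] -> latents [n, d_sae]
    acts_toxic / acts_clean: residual activations collected from toxic and
    non-toxic prompts (hypothetical tensors of shape [n, d_model]).
    """
    z_tox = sae_encode(acts_toxic)   # latent activations on toxic text
    z_cln = sae_encode(acts_clean)   # latent activations on clean text
    # Mean activation difference per latent: large positive values mark
    # features strongly correlated with toxicity.
    score = z_tox.mean(dim=0) - z_cln.mean(dim=0)
    return torch.argsort(score, descending=True)  # latent indices, most toxic first
```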

📝 Abstract
Large language models (LLMs) are now ubiquitous in user-facing applications, yet they still generate undesirable toxic outputs, including profanity, vulgarity, and derogatory remarks. Although numerous detoxification methods exist, most apply broad, surface-level fixes and can therefore easily be circumvented by jailbreak attacks. In this paper we leverage sparse autoencoders (SAEs) to identify toxicity-related directions in the residual stream of models and perform targeted activation steering using the corresponding decoder vectors. We introduce three tiers of steering aggressiveness and evaluate them on GPT-2 Small and Gemma-2-2B, revealing trade-offs between toxicity reduction and language fluency. At stronger steering strengths, these causal interventions surpass competitive baselines in reducing toxicity by up to 20%, though fluency can degrade noticeably on GPT-2 Small depending on the aggressiveness. Crucially, standard NLP benchmark scores upon steering remain stable, indicating that the model's knowledge and general abilities are preserved. We further show that feature-splitting in wider SAEs hampers safety interventions, underscoring the importance of disentangled feature learning. Our findings highlight both the promise and the current limitations of SAE-based causal interventions for LLM detoxification, further suggesting practical guidelines for safer language-model deployment.
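The steering itself can be pictured as subtracting a scaled SAE decoder direction from the residual stream at generation time. The sketch below uses a PyTorch forward hook; the layer index, hook placement, and strength `alpha` are illustrative assumptions, not the paper's exact intervention code.

```python
import torch

def make_steering_hook(decoder_vec, alpha):
    """Forward hook that shifts the residual stream away from a toxic direction.

    decoder_vec: SAE decoder row for the chosen toxicity feature, shape [d_model].
    alpha: steering strength; larger values detoxify harder but risk fluency.
    """
    direction = decoder_vec / decoder_vec.norm()

    def hook(module, inputs, output):
        resid = output[0] if isinstance(output, tuple) else output
        steered = resid - alpha * direction  # move away from the toxic direction
        return (steered, *output[1:]) if isinstance(output, tuple) else steered

    return hook

# Hypothetical usage on a HuggingFace-style GPT-2, steering at block 6:
# handle = model.transformer.h[6].register_forward_hook(
#     make_steering_hook(w_dec[toxic_idx], alpha=8.0))
```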
Problem

Research questions and friction points this paper is trying to address.

Identify toxicity-related directions in the residual stream of LLMs using sparse autoencoders
Reduce toxic outputs via targeted activation steering while managing the toxicity-fluency trade-off (see the sketch after this list)
Preserve model knowledge and general capabilities while detoxifying via causal interventions
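One plausible way to quantify the toxicity-fluency trade-off named above: score steered continuations with a toxicity classifier and use teacher-forced perplexity as a fluency proxy. Everything here is a hypothetical stand-in; `toxicity_score` could be, e.g., a Perspective-API-style scorer, and the generation settings are arbitrary.

```python
import torch

@torch.no_grad()
def tradeoff_eval(model, tokenizer, prompts, toxicity_score):
    """Return (mean toxicity, perplexity) of continuations (illustrative only)."""
    tox, nll, n_tok = 0.0, 0.0, 0
    for p in prompts:
        ids = tokenizer(p, return_tensors="pt").input_ids
        gen = model.generate(ids, max_new_tokens=30, do_sample=False)
        text = tokenizer.decode(gen[0, ids.shape[1]:], skip_special_tokens=True)
        tox += toxicity_score(text)          # hypothetical text -> [0, 1] scorer
        out = model(gen, labels=gen)         # teacher-forced NLL as fluency proxy
        nll += out.loss.item() * gen.shape[1]
        n_tok += gen.shape[1]
    return tox / len(prompts), torch.exp(torch.tensor(nll / n_tok)).item()
```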
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using sparse autoencoders to localize toxicity-related feature directions
Targeted activation steering with SAE decoder vectors
Three tiers of steering aggressiveness for toxicity reduction (sketched below)
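The three-tier mechanism is described only at a high level; one reading, sketched below, maps the toxic feature's current activation to one of three preset steering strengths. All thresholds and coefficients are invented for illustration, not the paper's values.

```python
def pick_alpha(feature_activation):
    """Choose a steering strength from three preset tiers.

    Thresholds and alphas are invented for illustration; the paper's actual
    three-tier mechanism and its values are not reproduced here.
    """
    if feature_activation < 0.5:
        return 2.0   # mild: feature barely active, light correction
    if feature_activation < 2.0:
        return 6.0   # moderate: clear toxic signal
    return 12.0      # aggressive: strong toxic signal, steer hard
```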
Agam Goyal
CS PhD Student, University of Illinois Urbana-Champaign
Natural Language Processing · Human-AI Interaction · Social Computing · Computational Social Science
Vedant Rathi
Siebel School of Computing and Data Science, University of Illinois Urbana-Champaign
William Yeh
Siebel School of Computing and Data Science, University of Illinois Urbana-Champaign
Yian Wang
Siebel School of Computing and Data Science, University of Illinois Urbana-Champaign
Yuen Chen
University of Illinois at Urbana-Champaign
Machine Learning · Causality · Trustworthy ML
Hari Sundaram
Siebel School of Computing and Data Science, University of Illinois Urbana-Champaign