Projecting Out the Malice: A Global Subspace Approach to LLM Detoxification

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 0
📄 PDF
🤖 AI Summary
Large language models are prone to generating toxic content, and existing alignment methods struggle to fully eliminate toxicity embedded in model parameters, leaving models vulnerable to adversarial attacks. This work proposes GLOSS, a novel approach that introduces the concept of a "global toxic subspace" for the first time. Leveraging mechanistic interpretability, GLOSS precisely identifies and removes this subspace within feedforward network (FFN) layers, overcoming the limitations of local methods, which are susceptible to reconstruction or noise interference. Requiring no extensive retraining and incurring minimal computational overhead, GLOSS achieves state-of-the-art detoxification performance on large models such as Qwen3 while effectively preserving their general capabilities.

๐Ÿ“ Abstract
Large language models (LLMs) exhibit exceptional performance but pose inherent risks of generating toxic content, restricting their safe deployment. While traditional methods (e.g., alignment) adjust output preferences, they fail to eliminate underlying toxic regions in parameters, leaving models vulnerable to adversarial attacks. Prior mechanistic studies characterize toxic regions as "toxic vectors" or "layer-wise subspaces", yet our analysis identifies critical limitations: i) Removed toxic vectors can be reconstructed via linear combinations of non-toxic vectors, demanding targeting of the entire toxic subspace; ii) Contrastive objectives over limited samples inject noise into layer-wise subspaces, hindering stable extraction. These findings highlight the challenge of identifying a robust toxic subspace and removing it. Therefore, we propose GLOSS (GLobal tOxic Subspace Suppression), a lightweight method that mitigates toxicity by identifying and eliminating this global subspace from FFN parameters. Experiments on LLMs (e.g., Qwen3) show GLOSS achieves SOTA detoxification while preserving general capabilities without requiring large-scale retraining. WARNING: This paper contains content which is toxic in nature.
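The core operation described in the abstract, eliminating a subspace from FFN weight matrices, amounts to an orthogonal projection in linear algebra. The sketch below illustrates that projection step only; the dimensions and the subspace basis `U` are synthetic placeholders, since the paper's actual contribution is how the global toxic subspace is identified from model internals, which this toy example does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ffn, k = 64, 256, 4  # hypothetical model/FFN widths and subspace rank

# Stand-in for an FFN down-projection weight matrix (d_model x d_ffn).
W_out = rng.standard_normal((d_model, d_ffn))

# Orthonormal basis U (d_model x k) for the subspace to suppress.
# Here it is random for illustration; GLOSS would derive it from the model.
U, _ = np.linalg.qr(rng.standard_normal((d_model, k)))

# Project the subspace out of the weights: W' = (I - U U^T) W.
# After this edit, every FFN output W' @ h has no component along U.
W_clean = W_out - U @ (U.T @ W_out)

# Sanity check: the cleaned weights are orthogonal to the suppressed basis.
print(np.abs(U.T @ W_clean).max())  # ~0 up to floating-point error
```

Because the edit is a single closed-form update per weight matrix, it requires no gradient steps, which is consistent with the abstract's claim of avoiding large-scale retraining.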
Problem

Research questions and friction points this paper is trying to address.

toxicity
large language models
subspace
detoxification
adversarial robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

toxic subspace
global subspace suppression
LLM detoxification
mechanistic interpretability
parameter space intervention
Zenghao Duan
CAS Key Laboratory of AI Safety, Institute of Computing Technology, CAS
large language model

Zhiyi Yin
State Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences

Zhichao Shi
School of Advanced Interdisciplinary; Institute of Computing Technology, Chinese Academy of Sciences

Liang Pang
Associate Professor, Institute of Computing Technology, Chinese Academy of Sciences
Large Language Model · Semantic Matching · Question Answering · Text Matching · Text Generation

Shaoling Jing
State Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences

Zihe Huang
State Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences

Jiayi Wu
Dalian University of Technology

Yu Yan
State Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences

Jingcheng Deng
Institute of Computing Technology, Chinese Academy of Sciences
Retrieval-Augmented Model · LLM Multi-Agent

Huawei Shen
State Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences

Xueqi Cheng
Ph.D. student, Florida State University
Data mining · LLM · GNN · Computational social science