Do Prompts Guarantee Safety? Mitigating Toxicity from LLM Generations through Subspace Intervention

📅 2026-02-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge that large language models can generate subtle, context-dependent toxic content even in response to harmless prompts, behavior that evades conventional token- or sentence-level moderation techniques. To tackle this, the authors propose a targeted intervention operating in the model's representational subspace: by identifying and suppressing latent toxic subspaces, the method achieves precise toxicity control without compromising text fluency. This mitigates the longstanding trade-off between safety and generation quality, and the method significantly outperforms existing baselines on the RealToxicityPrompts benchmark: it reduces toxicity by 8–20% compared to state-of-the-art detoxification systems while maintaining comparable fluency and incurring negligible additional inference overhead.

📝 Abstract
Large Language Models (LLMs) are powerful text generators, yet they can produce toxic or harmful content even when given seemingly harmless prompts. This presents a serious safety challenge and can cause real-world harm. Toxicity is often subtle and context-dependent, making it difficult to detect at the token level or through coarse sentence-level signals. Moreover, efforts to mitigate toxicity often face a trade-off between safety and the coherence, or fluency, of the generated text. In this work, we present a targeted subspace intervention strategy that identifies and suppresses hidden toxic patterns in the underlying model representations while preserving the model's overall ability to generate safe, fluent content. On the RealToxicityPrompts benchmark, our method achieves strong mitigation performance compared to existing baselines, with minimal impact on inference complexity. Across multiple LLMs, our approach reduces the toxicity of state-of-the-art detoxification systems by 8–20% while maintaining comparable fluency. Through extensive quantitative and qualitative analyses, we show that our approach achieves effective toxicity reduction without impairing generative performance, consistently outperforming existing baselines.
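The paper does not spell out its intervention here, but the general idea behind a representational subspace intervention can be sketched as follows: given an (assumed) orthonormal basis spanning a learned "toxic" subspace of the hidden-state space, each hidden state is projected onto the orthogonal complement of that subspace, removing the toxic component while leaving the rest of the representation untouched. The function and variable names below are illustrative, not the authors' implementation.

```python
import numpy as np

def suppress_subspace(hidden: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Project hidden states onto the orthogonal complement of a subspace.

    hidden: (n, d) array of hidden states.
    basis:  (k, d) array whose rows are orthonormal vectors spanning the
            (hypothetical) toxic subspace, k << d.
    """
    # Component of each hidden state lying inside the toxic subspace:
    # (hidden @ basis.T) gives coordinates, @ basis maps them back to R^d.
    toxic_component = hidden @ basis.T @ basis
    # Subtracting it leaves the projection onto the orthogonal complement.
    return hidden - toxic_component

# Toy example in d = 3: suppose the toxic subspace is the first axis.
basis = np.array([[1.0, 0.0, 0.0]])
h = np.array([[2.0, 3.0, 4.0]])
cleaned = suppress_subspace(h, basis)
# The component along the toxic direction is removed: [[0., 3., 4.]]
```

Because the operation is a single fixed linear projection per layer, it adds essentially no inference cost, which is consistent with the abstract's claim of minimal impact on inference complexity.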
Problem

Research questions and friction points this paper is trying to address.

toxicity
large language models
safety
harmful content
prompt
Innovation

Methods, ideas, or system contributions that make the work stand out.

subspace intervention
toxicity mitigation
large language models
safety alignment
representation editing
Himanshu Singh
Department of Computer Science and Engineering, IIIT Delhi, India
Ziwei Xu
National University of Singapore
Machine Learning · Knowledge Representation · AI Safety
A. V. Subramanyam
Department of Electronics and Communications Engineering, IIIT Delhi, India
Mohan Kankanhalli
School of Computing, National University of Singapore