🤖 AI Summary
This work addresses the challenge that large language models can generate subtle, context-dependent toxic content even in response to harmless prompts—behavior that evades detection by conventional token- or sentence-level moderation techniques. To tackle this issue, the authors propose a targeted intervention strategy operating in the model's representational space: by identifying and suppressing latent toxic subspaces, the method achieves precise toxicity control without compromising text fluency. This approach mitigates the longstanding trade-off between safety and generation quality, outperforming existing baselines on the RealToxicityPrompts benchmark. Specifically, it reduces toxicity by 8–20% relative to state-of-the-art detoxification systems while maintaining comparable fluency and incurring negligible additional inference overhead.
📝 Abstract
Large Language Models (LLMs) are powerful text generators, yet they can produce toxic or harmful content even when given seemingly harmless prompts. This presents a serious safety challenge and can cause real-world harm. Toxicity is often subtle and context-dependent, making it difficult to detect at the token level or through coarse sentence-level signals. Moreover, efforts to mitigate toxicity often face a trade-off between safety and the coherence or fluency of the generated text. In this work, we present a targeted subspace intervention strategy for identifying and suppressing hidden toxic patterns in underlying model representations, while preserving the model's overall ability to generate safe, fluent content. On the RealToxicityPrompts benchmark, our method achieves strong mitigation performance compared to existing baselines, with minimal impact on inference complexity. Across multiple LLMs, our approach reduces the toxicity of state-of-the-art detoxification systems by 8–20%, while maintaining comparable fluency. Through extensive quantitative and qualitative analyses, we show that our approach achieves effective toxicity reduction without impairing generative performance, consistently outperforming existing baselines.
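The abstract does not spell out the intervention mechanics. As a hedged sketch only (not the authors' actual method), one common form of subspace intervention estimates a "toxic" direction from contrasting hidden activations and projects it out of the model's hidden states at inference time; the data and function names below are hypothetical:

```python
import numpy as np

def estimate_toxic_direction(toxic_acts, benign_acts):
    """Unit mean-difference direction between toxic and benign hidden states.

    toxic_acts, benign_acts: (n, d) arrays of hidden states collected from
    toxic and benign continuations (hypothetical calibration data).
    """
    d = toxic_acts.mean(axis=0) - benign_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def suppress(hidden, v):
    """Remove each hidden state's component along the toxic direction v."""
    return hidden - np.outer(hidden @ v, v)

# Tiny synthetic demo: toxic states are benign states shifted along a known axis.
rng = np.random.default_rng(0)
v_true = np.array([1.0, 0.0, 0.0])
benign = rng.normal(size=(100, 3))
toxic = benign + 3.0 * v_true

v = estimate_toxic_direction(toxic, benign)
cleaned = suppress(toxic, v)
# After projection, the cleaned states have ~zero component along v.
```

In a real system this projection would be applied to a transformer layer's residual stream during decoding, which is why the added inference cost is small: it is a single rank-one update per token.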