AI Summary
This work proposes a fully self-reflective detoxification framework for large language models that eliminates reliance on external modules, human annotations, or manual intervention. By uncovering and harnessing the model's intrinsic self-detoxification capability, the approach integrates an embedded toxicity signal detector with a systematic intervention pipeline, and leverages an iteratively generated contrastive dataset for self-supervised fine-tuning. This yields an end-to-end, externally unassisted mechanism for safe text generation. Evaluated on benchmarks such as DetoxLLM and ParaDetox, the method outperforms current state-of-the-art approaches, achieving substantially improved detoxification performance while effectively preserving semantic fidelity.
Abstract
Recent breakthroughs in Large Language Models (LLMs) have revealed remarkable generative capabilities and emerging self-regulatory mechanisms, including self-correction and self-rewarding. However, current detoxification techniques rarely exploit these built-in abilities; instead, they rely on external modules, labor-intensive data annotation, or human intervention, all of which hinder scalability and consistency. In this paper, we introduce a fully self-reflective detoxification framework that harnesses the inherent capacities of LLMs to detect and correct toxic content and to refine themselves, without external modules or data annotation. Specifically, we propose a Toxic Signal Detector, an internal self-identification mechanism, coupled with a systematic intervention process that transforms toxic text into its non-toxic counterpart. This iterative procedure yields a contrastive detoxification dataset used to fine-tune the model, enhancing its ability to generate safe and coherent text. Experiments on benchmark datasets such as DetoxLLM and ParaDetox show that our method achieves better detoxification performance than state-of-the-art methods while preserving semantic fidelity. By obviating the need for human intervention or external components, this work reveals the intrinsic self-detoxification ability of LLMs, offering a consistent and effective approach to mitigating harmful content generation. Ultimately, our findings underscore the potential of truly self-regulated language models, paving the way for more responsible and ethically guided text generation systems.
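The pipeline described above (detect toxic signals, intervene to rewrite, collect contrastive pairs for fine-tuning) can be sketched in miniature. This is a hypothetical toy illustration only: the `ToyModel`, its keyword-based `detect_toxicity`, and the substitution-based `rewrite` are stand-in placeholders invented here, not the paper's actual LLM-internal Toxic Signal Detector or intervention mechanism.

```python
# Toy sketch of one iteration of the self-reflective detoxification loop.
# All names and the keyword-filter logic are illustrative assumptions,
# not the paper's method.

TOXIC_SUBS = {"idiot": "person", "stupid": "misguided"}  # placeholder lexicon

class ToyModel:
    def generate(self, prompt):
        # Placeholder: echo the prompt as the "generated" text.
        return prompt

    def detect_toxicity(self, text):
        # Stand-in for the Toxic Signal Detector: simple keyword check.
        return any(w in TOXIC_SUBS for w in text.split())

    def rewrite(self, text):
        # Stand-in for the intervention step: swap toxic tokens
        # for neutral counterparts.
        return " ".join(TOXIC_SUBS.get(w, w) for w in text.split())

def build_contrastive_pairs(model, prompts):
    """Collect (toxic, detoxified) pairs; in the full framework these
    pairs would drive self-supervised fine-tuning of the model."""
    pairs = []
    for p in prompts:
        text = model.generate(p)
        if model.detect_toxicity(text):
            pairs.append((text, model.rewrite(text)))
    return pairs

pairs = build_contrastive_pairs(
    ToyModel(), ["you are an idiot", "hello there"]
)
# Only the toxic generation yields a contrastive pair.
```

In the actual framework, detection and rewriting are performed by the LLM itself rather than a lexicon, and the collected pairs feed an iterative fine-tuning loop.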