Cleansing the Artificial Mind: A Self-Reflective Detoxification Framework for Large Language Models

πŸ“… 2026-01-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work proposes a fully self-reflective detoxification framework for large language models that eliminates reliance on external modules, human annotations, or manual intervention. By uncovering and harnessing the model’s intrinsic self-detoxification capability, the approach integrates an embedded toxicity signal detector with a systematic intervention pipeline and leverages an iteratively generated contrastive dataset for self-supervised fine-tuning. This yields an end-to-end, externally unassisted mechanism for safe text generation. Evaluated on benchmarks such as DetoxLLM and ParaDetox, the method outperforms current state-of-the-art approaches, achieving substantially improved detoxification performance while effectively preserving semantic fidelity.

πŸ“ Abstract
Recent breakthroughs in Large Language Models (LLMs) have revealed remarkable generative capabilities and emerging self-regulatory mechanisms, including self-correction and self-rewarding. However, current detoxification techniques rarely exploit these built-in abilities; instead, they rely on external modules, labor-intensive data annotation, or human intervention, factors that hinder scalability and consistency. In this paper, we introduce a fully self-reflective detoxification framework that harnesses the inherent capacities of LLMs to detect and correct toxic content, and to refine themselves, without external modules or data annotation. Specifically, we propose a Toxic Signal Detector, an internal self-identification mechanism, coupled with a systematic intervention process that transforms toxic text into its non-toxic counterpart. This iterative procedure yields a contrastive detoxification dataset used to fine-tune the model, enhancing its ability to generate safe and coherent text. Experiments on benchmark datasets such as DetoxLLM and ParaDetox show that our method achieves better detoxification performance than state-of-the-art methods while preserving semantic fidelity. By obviating the need for human intervention or external components, this paper reveals the intrinsic self-detoxification ability of LLMs, offering a consistent and effective approach to mitigating harmful content generation. Ultimately, our findings underscore the potential of truly self-regulated language models, paving the way for more responsible and ethically guided text generation systems.
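
Read as pseudocode, the pipeline in the abstract amounts to a detect-rewrite-verify loop over the model's own outputs: score a text with the internal Toxic Signal Detector, have the model rewrite it if it scores toxic, and keep the (toxic, detoxified) pair for contrastive fine-tuning. The Python sketch below illustrates that loop under stated assumptions; the function names, prompts, and threshold are hypothetical and are not the paper's actual interface, which derives the toxicity signal internally rather than via a scoring prompt.

```python
# A minimal sketch of the self-reflective detoxification loop described in the
# abstract. All names here (llm, detect_toxicity, rewrite) are illustrative
# assumptions, not the paper's API.

from typing import Callable, List, Tuple

def build_contrastive_pairs(
    llm: Callable[[str], str],   # any prompt -> completion function
    corpus: List[str],           # candidate texts to screen
    toxicity_threshold: float = 0.5,
    max_rounds: int = 3,
) -> List[Tuple[str, str]]:
    """Collect (toxic, detoxified) pairs using only the model itself."""

    def detect_toxicity(text: str) -> float:
        # Toxic Signal Detector stand-in: ask the model to self-score the
        # input. (A prompt-based proxy; the paper uses an internal signal.)
        reply = llm(f"Rate the toxicity of this text from 0 to 1:\n{text}\nScore:")
        try:
            return float(reply.strip().split()[0])
        except (ValueError, IndexError):
            return 0.0

    def rewrite(text: str) -> str:
        # Intervention step: the model rewrites its own toxic text.
        return llm("Rewrite the text below to remove toxicity while "
                   f"preserving its meaning:\n{text}\nRewrite:")

    pairs: List[Tuple[str, str]] = []
    for text in corpus:
        if detect_toxicity(text) < toxicity_threshold:
            continue  # already safe; not useful as a contrastive example
        candidate = text
        for _ in range(max_rounds):  # iterate until a rewrite passes the check
            candidate = rewrite(candidate)
            if detect_toxicity(candidate) < toxicity_threshold:
                pairs.append((text, candidate))
                break
    return pairs
```

The resulting pairs would then serve as the contrastive detoxification dataset for self-supervised fine-tuning, closing the loop without an external classifier or human annotator.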
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Detoxification
Toxic Content
Self-Regulation
Harmful Content Generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Reflective Detoxification
Toxic Signal Detector
Intrinsic Self-Regulation
Contrastive Detoxification Dataset
Large Language Models