LLM in the Loop: Creating the PARADEHATE Dataset for Hate Speech Detoxification

📅 2025-06-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Addressing the scarcity of high-quality parallel data for hate speech detoxification, this work introduces an LLM-in-the-loop paradigm for automated data construction: the lightweight model GPT-4o-mini replaces costly and ethically sensitive human annotation in the data generation loop. The result is PARADEHATE, the first high-quality parallel benchmark dataset specifically designed for hate speech detoxification, comprising over 8K text pairs. Fine-tuning a BART model on PARADEHATE yields clear improvements across three key evaluation dimensions: style accuracy, content preservation, and fluency. Empirical results show that LLM-generated detoxified text performs comparably to human annotation in this sensitive domain, offering both high reliability and strong scalability.

📝 Abstract
Detoxification, the task of rewriting harmful language into non-toxic text, has become increasingly important amid the growing prevalence of toxic content online. However, high-quality parallel datasets for detoxification, especially for hate speech, remain scarce due to the cost and sensitivity of human annotation. In this paper, we propose a novel LLM-in-the-loop pipeline leveraging GPT-4o-mini for automated detoxification. We first replicate the ParaDetox pipeline by replacing human annotators with an LLM and show that the LLM performs comparably to human annotation. Building on this, we construct PARADEHATE, a large-scale parallel dataset specifically for hate speech detoxification. We release PARADEHATE as a benchmark of over 8K hate/non-hate text pairs and evaluate a wide range of baseline methods. Experimental results show that models such as BART, fine-tuned on PARADEHATE, achieve better performance in style accuracy, content preservation, and fluency, demonstrating the effectiveness of LLM-generated detoxification text as a scalable alternative to human annotation.
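The three evaluation dimensions named in the abstract can be illustrated with a toy scoring sketch. Note these are simplified stand-ins of my own devising, not the paper's actual metrics: detoxification work typically uses a trained toxicity classifier for style accuracy, embedding similarity for content preservation, and a language-model score for fluency.

```python
# Toy stand-ins for the three text-style-transfer evaluation dimensions:
# style accuracy, content preservation, and fluency (illustrative only).

TOXIC_WORDS = {"idiot", "stupid"}  # stand-in for a toxicity classifier's vocabulary

def style_accuracy(outputs):
    """Fraction of rewrites that a (toy) toxicity check marks as clean."""
    clean = sum(1 for t in outputs if not set(t.lower().split()) & TOXIC_WORDS)
    return clean / len(outputs)

def content_preservation(src, tgt):
    """Token-overlap (Jaccard) proxy for semantic similarity."""
    a, b = set(src.lower().split()), set(tgt.lower().split())
    return len(a & b) / max(len(a | b), 1)

def fluency(text):
    """Crude proxy: penalize immediately repeated tokens."""
    toks = text.split()
    repeats = sum(1 for i in range(1, len(toks)) if toks[i] == toks[i - 1])
    return 1.0 - repeats / max(len(toks), 1)
```

A real benchmark run would aggregate these per-pair scores over the test set, often multiplying them into a joint score.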
Problem

Research questions and friction points this paper is trying to address.

Lack of high-quality parallel datasets for hate speech detoxification
High cost and sensitivity of human annotation for toxic content
Need for scalable automated detoxification methods using LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-in-the-loop pipeline for detoxification
GPT-4o-mini replaces human annotators
PARADEHATE dataset for hate speech detoxification
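The pipeline shape implied above, following ParaDetox, can be sketched as a generate-then-filter loop. This is a minimal sketch under assumed details: the `detoxify`, `is_toxic`, and `content_overlap` functions below are toy stand-ins, where the real pipeline would call GPT-4o-mini for the rewrite and use trained classifiers for the filters.

```python
# Sketch of an LLM-in-the-loop parallel-data construction loop:
# generate a detoxified rewrite, then keep the pair only if it passes
# non-toxicity and content-preservation filters.

BANNED = {"idiot", "stupid"}  # toy stand-in for a toxicity model

def detoxify(text):
    """Stand-in for a GPT-4o-mini rewrite call: drop banned words."""
    return " ".join(w for w in text.split() if w.lower() not in BANNED)

def is_toxic(text):
    return any(w.lower() in BANNED for w in text.split())

def content_overlap(src, tgt):
    """Toy content-preservation check via token overlap."""
    a, b = set(src.lower().split()), set(tgt.lower().split())
    return len(a & b) / max(len(a), 1)

def build_parallel_pairs(toxic_texts, min_overlap=0.5):
    pairs = []
    for src in toxic_texts:
        tgt = detoxify(src)
        # Filtering step: accept only non-toxic, content-preserving rewrites.
        if not is_toxic(tgt) and content_overlap(src, tgt) >= min_overlap:
            pairs.append((src, tgt))
    return pairs
```

The key design point is that rejected generations are simply discarded, so dataset quality is enforced by the filters rather than by human review.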