Compositional Generalisation for Explainable Hate Speech Detection

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing hate speech detection models exhibit poor compositional generalization, i.e., they fail to robustly classify unseen semantic combinations of known concepts. Method: the authors propose an interpretable, decoupled modelling paradigm that separates composable semantic units from their contextual dependencies. Specifically, they (i) construct U-PLEAD, a synthetic dataset in which compositional expressions occur with uniform frequency across contexts, together with a human-validated compositional benchmark; (ii) introduce span-level structured annotations that explicitly supervise the learning of composable semantic units; and (iii) combine synthetic data generation, multi-source mixed training, and structured prompt-based fine-tuning. Results: the approach achieves state-of-the-art performance on the real-world, human-sourced PLEAD dataset (F1 = 89.7) and significantly improves compositional generalization on the new benchmark. Moreover, it enables fine-grained attribution, e.g., identifying targets and derogatory strategies, demonstrating both strong generalization and intrinsic interpretability.

📝 Abstract
Hate speech detection is key to online content moderation, but current models struggle to generalise beyond their training data. This has been linked to dataset biases and the use of sentence-level labels, which fail to teach models the underlying structure of hate speech. In this work, we show that even when models are trained with more fine-grained, span-level annotations (e.g., "artists" is labeled as target and "are parasites" as dehumanising comparison), they struggle to disentangle the meaning of these labels from the surrounding context. As a result, combinations of expressions that deviate from those seen during training remain particularly difficult for models to detect. We investigate whether training on a dataset where expressions occur with equal frequency across all contexts can improve generalisation. To this end, we create U-PLEAD, a dataset of ~364,000 synthetic posts, along with a novel compositional generalisation benchmark of ~8,000 manually validated posts. Training on a combination of U-PLEAD and real data improves compositional generalisation while achieving state-of-the-art performance on the human-sourced PLEAD.
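The abstract's example ("artists" as target, "are parasites" as dehumanising comparison) can be made concrete with a minimal sketch of span-level annotation. This is an illustrative schema, not the actual PLEAD annotation format; the field and label names are assumptions.

```python
from dataclasses import dataclass

# Hypothetical span-level annotation schema: each span pairs a
# character range in the post with a semantic role label.
@dataclass
class Span:
    start: int  # character offset into the post (inclusive)
    end: int    # character offset (exclusive)
    label: str  # semantic role, e.g. "target"

post = "artists are parasites"
spans = [
    Span(0, 7, "target"),                    # "artists"
    Span(8, 21, "dehumanising comparison"),  # "are parasites"
]

for s in spans:
    print(f"{post[s.start:s.end]!r} -> {s.label}")
```

The paper's point is that models trained even on such span labels still entangle them with context, motivating the uniform-frequency synthetic data described next.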
Problem

Research questions and friction points this paper is trying to address.

Improving hate speech detection generalization beyond training data
Overcoming dataset biases and the limits of sentence- and span-level labels
Enhancing model performance with synthetic and real data combination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses span-level annotations for fine-grained detection
Creates synthetic dataset U-PLEAD for balanced training
Combines synthetic and real data for better generalization
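The core idea behind U-PLEAD, expressions appearing with equal frequency across all contexts, can be sketched as uniform sampling over the cross-product of components. This is a toy illustration, not the actual U-PLEAD generation pipeline; the component lists and template are invented for the example.

```python
import itertools
import random

# Toy component inventories (illustrative only).
targets = ["artists", "lawyers", "tourists"]
expressions = ["are parasites", "should be banned", "ruin everything"]

# Enumerate every (target, expression) combination: with uniform
# sampling over this list, no pairing dominates training, unlike
# real-world data where some combinations are far more frequent.
pairs = list(itertools.product(targets, expressions))

random.seed(0)
target, expression = random.choice(pairs)  # each pair has prob 1/9
post = f"{target} {expression}"
```

In the real dataset this uniformity is what lets the model learn each semantic unit independently of the contexts it happened to co-occur with.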