🤖 AI Summary
Existing hate speech detection models exhibit poor compositional generalization: they fail to robustly classify unseen semantic combinations of known concepts. Method: we propose an interpretable, decoupled modeling paradigm that separates semantic units from their contextual dependencies. Specifically, we (i) construct U-PLEAD, the first synthetic dataset built from uniform-distribution compositional expression generation, along with a human-validated compositional benchmark; (ii) introduce span-level structured annotations that explicitly supervise the learning of composable semantic units; and (iii) combine synthetic data generation, multi-source mixed training, and structured prompt-based fine-tuning. Results: the approach achieves state-of-the-art performance on the real-world PLEAD dataset (F1 = 89.7) and significantly improves compositional generalization on the new benchmark. It also enables fine-grained attribution (e.g., identifying targets and derogatory strategies), demonstrating both strong generalization and intrinsic interpretability.
📝 Abstract
Hate speech detection is key to online content moderation, but current models struggle to generalise beyond their training data. This has been linked to dataset biases and the use of sentence-level labels, which fail to teach models the underlying structure of hate speech. In this work, we show that even when models are trained with more fine-grained, span-level annotations (e.g., "artists" labeled as a target and "are parasites" as a dehumanising comparison), they struggle to disentangle the meaning of these labels from the surrounding context. As a result, combinations of expressions that deviate from those seen during training remain particularly difficult for models to detect. We investigate whether training on a dataset in which expressions occur with equal frequency across all contexts can improve generalisation. To this end, we create U-PLEAD, a dataset of ~364,000 synthetic posts, along with a novel compositional generalisation benchmark of ~8,000 manually validated posts. Training on a combination of U-PLEAD and real data improves compositional generalisation while achieving state-of-the-art performance on the human-sourced PLEAD dataset.
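To make the span-level annotation idea concrete, here is a minimal sketch of how such annotations might be represented for the example in the abstract. The record layout and label names are illustrative assumptions, not the actual PLEAD/U-PLEAD schema.

```python
from dataclasses import dataclass

# Hypothetical span-level annotation record; field and label names
# are illustrative, not the actual PLEAD/U-PLEAD schema.
@dataclass
class Span:
    text: str
    start: int   # character offset of the span in the post
    end: int     # exclusive end offset
    label: str   # e.g. "target" or "dehumanising_comparison"

def annotate(post: str, phrases: list[tuple[str, str]]) -> list[Span]:
    """Locate each (phrase, label) pair in the post and record its offsets."""
    spans = []
    for phrase, label in phrases:
        start = post.index(phrase)  # first occurrence of the phrase
        spans.append(Span(phrase, start, start + len(phrase), label))
    return spans

# The example from the abstract:
post = "artists are parasites"
spans = annotate(post, [("artists", "target"),
                        ("are parasites", "dehumanising_comparison")])
```

In this representation, each labeled expression is tied to explicit character offsets, which is what allows supervision at the level of composable semantic units rather than a single sentence-level label.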