HatePrototypes: Interpretable and Transferable Representations for Implicit and Explicit Hate Speech Detection

📅 2025-11-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing hate speech detection models primarily target overt expressions and struggle to identify covert forms, such as derogatory analogies and subtle discrimination, that demand deep semantic understanding rather than surface-level features. To address this, we propose HatePrototypes: a framework that constructs class-level prototype vector representations from safety-aligned language models, yielding interpretable and transferable representations of hate semantics. HatePrototypes achieves efficient cross-dataset transfer between explicit and implicit hate detection tasks with only 50 labeled samples per class. It further incorporates a parameter-free early-exit mechanism to accelerate inference. Extensive evaluations across multiple benchmarks demonstrate substantial performance gains on both explicit and implicit hate detection, strong transfer robustness, and consistent generalization. To foster reproducible, efficient, and interpretable research, we publicly release our code and resources.
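The core idea, class-level prototypes built by averaging a handful of labeled embeddings and nearest-prototype classification by cosine similarity, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the `embed` vectors, function names, and the two-class setup are assumptions for the example.

```python
# Minimal sketch of prototype construction and nearest-prototype
# classification (hypothetical; not the released HatePrototypes code).
import numpy as np

def build_prototypes(embeddings, labels):
    """Average the embeddings of each class into one prototype vector.
    As in the paper's setup, ~50 labeled examples per class suffice."""
    protos = {}
    for label in set(labels):
        vecs = [e for e, l in zip(embeddings, labels) if l == label]
        protos[label] = np.mean(vecs, axis=0)
    return protos

def classify(embedding, protos):
    """Assign the class whose prototype is most cosine-similar."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(protos, key=lambda c: cos(embedding, protos[c]))
```

Because prototypes are just averaged vectors in the model's embedding space, they are cheap to build, inspectable, and, as the paper reports, interchangeable across explicit- and implicit-hate benchmarks.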

📝 Abstract
Optimization of offensive content moderation models for different types of hateful messages is typically achieved through continued pre-training or fine-tuning on new hate speech benchmarks. However, existing benchmarks mainly address explicit hate toward protected groups and often overlook implicit or indirect hate, such as demeaning comparisons, calls for exclusion or violence, and subtle discriminatory language that still causes harm. While explicit hate can often be captured through surface features, implicit hate requires deeper, full-model semantic processing. In this work, we question the need for repeated fine-tuning and analyze the role of HatePrototypes, class-level vector representations derived from language models optimized for hate speech detection and safety moderation. We find that these prototypes, built from as few as 50 examples per class, enable cross-task transfer between explicit and implicit hate, with interchangeable prototypes across benchmarks. Moreover, we show that parameter-free early exiting with prototypes is effective for both hate types. We release the code, prototype resources, and evaluation scripts to support future research on efficient and transferable hate speech detection.
Problem

Research questions and friction points this paper is trying to address.

Detecting implicit hate speech beyond explicit surface features
Enabling cross-task transfer between explicit and implicit hate detection
Developing interpretable representations for efficient hate speech moderation
Innovation

Methods, ideas, or system contributions that make the work stand out.

HatePrototypes enable cross-task hate speech transfer
Parameter-free early exiting with prototypes is effective
Class-level vector representations from optimized language models
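The parameter-free early-exit idea above can be sketched as follows: build prototypes per layer, and stop forwarding through the model at the first layer where the top class clearly separates from the runner-up. The per-layer inputs, the margin rule, and all names here are illustrative assumptions, not the paper's exact criterion.

```python
# Hypothetical sketch of parameter-free early exiting with per-layer
# prototypes. `layer_embeddings` holds one vector per transformer layer;
# `layer_protos[i]` maps class -> prototype built from layer i's states.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def early_exit_classify(layer_embeddings, layer_protos, margin=0.1):
    """Exit at the first layer where the top-class similarity leads the
    runner-up by at least `margin`; otherwise use the last layer."""
    decision = None
    for emb, protos in zip(layer_embeddings, layer_protos):
        sims = sorted(((cosine(emb, p), c) for c, p in protos.items()),
                      reverse=True)
        decision = sims[0][1]
        if len(sims) == 1 or sims[0][0] - sims[1][0] >= margin:
            return decision  # confident enough: stop early
    return decision  # fell through to the final layer
```

No extra parameters are trained: the exit decision reuses the same prototype similarities as the classifier, which is what makes the mechanism "parameter-free".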