xList-Hate: A Checklist-Based Framework for Interpretable and Generalizable Hate Speech Detection

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited generalization of existing hate speech detection methods, which predominantly rely on binary classification paradigms that overfit dataset-specific definitions and degrade under domain shift or label noise. The authors propose the first diagnostic framework grounded in a normative checklist: a large language model (LLM) answers a series of predefined conceptual questions in a zero-shot manner, yielding interpretable binary diagnostic features. These features are then aggregated by a lightweight decision tree to produce the final prediction. The approach outperforms zero-shot LLM classification and improves on supervised fine-tuning under cross-dataset evaluation, demonstrating greater robustness across datasets, reduced sensitivity to annotation inconsistencies, and transparent, auditable, fine-grained decision paths.
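The pipeline above can be sketched in a few lines. This is a minimal, hypothetical illustration: the checklist questions, feature names, and the hand-written aggregation rules below are illustrative stand-ins, not the authors' actual checklist or learned decision tree, and `llm_yes_no` is a placeholder for any zero-shot LLM call.

```python
# Hypothetical sketch of a checklist-style diagnostic pipeline:
# questions, feature names, and the aggregation tree are illustrative only.

CHECKLIST = {
    "targets_protected_group": "Does the text refer to a protected group?",
    "contains_slur": "Does the text contain a slur or dehumanizing term?",
    "incites_harm": "Does the text call for or endorse harm?",
}

def answer_checklist(text, llm_yes_no):
    """Ask an LLM each checklist question zero-shot and collect binary
    diagnostic features. `llm_yes_no` is any callable mapping a prompt
    string to True/False (e.g. a wrapper around an LLM API)."""
    return {name: llm_yes_no(f"{q}\n\nText: {text}")
            for name, q in CHECKLIST.items()}

def aggregate(features):
    """Tiny hand-written stand-in for the paper's learned decision tree:
    the prediction follows an explicit, auditable path over the features."""
    if features["incites_harm"]:
        return "hate"
    if features["targets_protected_group"] and features["contains_slur"]:
        return "hate"
    return "not_hate"

# Toy keyword-based "LLM" so the sketch runs without an API; a real system
# would route each prompt to an actual model.
def toy_llm(prompt):
    text = prompt.split("Text:", 1)[1].lower()
    if "protected group" in prompt:
        return "group_x" in text
    if "slur" in prompt:
        return "slur_y" in text
    return "attack" in text

feats = answer_checklist("group_x are slur_y", toy_llm)
print(aggregate(feats))  # → hate
```

Because the final label is produced by explicit rules over named features, each prediction comes with a readable decision path, which is the interpretability property the framework emphasizes.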

📝 Abstract
Hate speech detection is commonly framed as a direct binary classification problem despite being a composite concept defined through multiple interacting factors that vary across legal frameworks, platform policies, and annotation guidelines. As a result, supervised models often overfit dataset-specific definitions and exhibit limited robustness under domain shift and annotation noise. We introduce xList-Hate, a diagnostic framework that decomposes hate speech detection into a checklist of explicit, concept-level questions grounded in widely shared normative criteria. Each question is independently answered by a large language model (LLM), producing a binary diagnostic representation that captures hateful content features without directly predicting the final label. These diagnostic signals are then aggregated by a lightweight, fully interpretable decision tree, yielding transparent and auditable predictions. We evaluate xList-Hate across multiple hate speech benchmarks and model families, comparing it against zero-shot LLM classification and in-domain supervised fine-tuning. While supervised methods typically maximize in-domain performance, xList-Hate consistently improves cross-dataset robustness and relative performance under domain shift. In addition, qualitative analysis of disagreement cases provides evidence that the framework can be less sensitive to certain forms of annotation inconsistency and contextual ambiguity. Crucially, the approach enables fine-grained interpretability through explicit decision paths and factor-level analysis. Our results suggest that reframing hate speech detection as a diagnostic reasoning task, rather than a monolithic classification problem, provides a robust, explainable, and extensible alternative for content moderation.
Problem

Research questions and friction points this paper is trying to address.

hate speech detection
domain shift
annotation noise
interpretability
generalizability
Innovation

Methods, ideas, or system contributions that make the work stand out.

checklist-based framework
interpretable hate speech detection
diagnostic reasoning
large language models
cross-dataset robustness