HateDay: Insights from a Global Hate Speech Dataset Representative of a Day on Twitter

📅 2024-11-23
🏛️ arXiv.org
📈 Citations: 1 · Influential: 0
🤖 AI Summary
Existing online hate speech detection models generalize poorly to real-world settings because of severe geographic and linguistic biases in the data used to evaluate them. Method: We introduce HateDay, the first globally representative, single-day Twitter dataset, built from a random sample of all tweets posted on September 21, 2022, covering eight languages and four English-speaking countries, with multilingual human annotation and cross-regional analysis of hate speech prevalence and composition. Contribution/Results: Our systematic evaluation reveals two fundamental biases in mainstream academic datasets: (1) a mismatch between the target groups they emphasize and those most often targeted in practice, and (2) conflation of hate speech with merely offensive language. Both lead to substantial overestimation of model performance: state-of-the-art detectors achieve F1 scores below 0.3 on real-world data, with especially poor results on non-European languages. Automated moderation alone is therefore unreliable, and meaningful moderation rates require human-in-the-loop oversight. HateDay establishes a benchmark evaluation framework grounded in realistic data distributions.
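
To make the evaluation setup concrete, the sketch below scores a detector's binary predictions against human labels per language and reports F1, the metric behind the "below 0.3" finding. The file name and column names ("lang", "label", "pred") are hypothetical placeholders, not HateDay's actual schema or the paper's code.

```python
# Minimal sketch: per-language F1 of a hate speech detector on a
# representative annotated sample. File and column names are
# hypothetical placeholders, not the paper's artifacts.
import pandas as pd
from sklearn.metrics import f1_score

df = pd.read_csv("hateday_sample.csv")  # one row per annotated tweet

for lang, group in df.groupby("lang"):
    # F1 on the positive (hate) class; pooled scoring would hide the
    # sharp cross-language variation the paper reports.
    score = f1_score(group["label"], group["pred"], pos_label=1)
    print(f"{lang}: F1 = {score:.3f} (n = {len(group)})")
```

Per-language rather than pooled scoring matters here, since the paper's headline finding is that performance varies sharply across languages.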

📝 Abstract
To address the global challenge of online hate speech, prior research has developed detection models to flag such content on social media. However, due to systematic biases in evaluation datasets, the real-world effectiveness of these models remains unclear, particularly across geographies. We introduce HateDay, the first global hate speech dataset representative of social media settings, constructed from a random sample of all tweets posted on September 21, 2022, and covering eight languages and four English-speaking countries. Using HateDay, we uncover substantial variation in the prevalence and composition of hate speech across languages and regions. We show that evaluations on academic datasets greatly overestimate real-world detection performance, which we find is very low, especially for non-European languages. Our analysis identifies key drivers of this gap, including models' difficulty in distinguishing hate from offensive speech and a mismatch between the target groups emphasized in academic datasets and those most frequently targeted in real-world settings. We argue that poor model performance makes public models ill-suited for automatic hate speech moderation and find that high moderation rates are only achievable with substantial human oversight. Our results underscore the need to evaluate detection systems on data that reflects the complexity and diversity of real-world social media.
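
As a concrete illustration of the prevalence measurement the abstract describes, here is a minimal sketch estimating hate speech prevalence from an annotated random sample, with a normal-approximation 95% confidence interval. The counts are invented for illustration and are not figures from the paper.

```python
# Sketch: prevalence estimate with a 95% normal-approximation interval
# from an annotated random sample. Counts are invented examples.
import math

n_annotated = 10_000   # tweets randomly sampled and labeled for one language
n_hate = 45            # of those, tweets labeled as hate speech

p = n_hate / n_annotated
se = math.sqrt(p * (1 - p) / n_annotated)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"prevalence = {p:.4%} (95% CI: {lo:.4%} to {hi:.4%})")
```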
Problem

Research questions and friction points this paper is trying to address.

Evaluating the real-world effectiveness of hate speech detection models across geographies and languages
Correcting the biases in existing hate speech datasets that prevent accurate global representation
Enabling models to distinguish hate speech from merely offensive speech
Innovation

Methods, ideas, or system contributions that make the work stand out.

First globally representative hate speech dataset, built from a random one-day sample of Twitter
Evaluates public detection models across eight languages and four English-speaking countries
Quantifies the gap between academic-benchmark and real-world detection performance