🤖 AI Summary
African multilingual hate speech detection faces the dual challenges of cultural misinterpretation and data scarcity. To address this, we introduce the first high-quality, culturally grounded dataset covering 15 African languages, annotated exclusively by native speakers within their local sociocultural contexts, with sustained community involvement in both annotation and lexicon development. We propose a novel fine-grained, culture-sensitive annotation framework and publicly release a bilingual open-source lexicon, individual annotator metadata, and benchmark classification models, including both traditional machine learning and LLM fine-tuning approaches. Experimental results demonstrate that our LLM-augmented methods significantly outperform zero-shot baselines across multiple cross-lingual hate speech classification tasks. This work bridges critical gaps in content moderation for low-resource Global South languages, providing both culturally representative data and methodologically robust, community-informed modeling frameworks.
📝 Abstract
Hate speech and abusive language are global phenomena that require socio-cultural background knowledge to be understood, identified, and moderated. However, in many regions of the Global South there have been documented occurrences of both (1) absent moderation and (2) censorship caused by reliance on keyword spotting divorced from context. Further, high-profile individuals have frequently been at the center of the moderation process, while large, targeted hate speech campaigns against minorities have been overlooked. These limitations stem mainly from the lack of high-quality data in local languages and the failure to include local communities in the collection, annotation, and moderation processes. To address these issues, we present AfriHate: a multilingual collection of hate speech and abusive language datasets in 15 African languages. Each instance in AfriHate is annotated by native speakers familiar with the local culture. We report the challenges related to constructing the datasets and present various classification baseline results with and without using LLMs. The datasets, individual annotations, and hate speech and offensive language lexicons are available at https://github.com/AfriHate/AfriHate.
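To make the "traditional machine learning" side of the baselines concrete, here is a minimal sketch of a hate/abuse text classifier using scikit-learn. The example texts and labels below are hypothetical placeholders, not AfriHate data, and the paper does not specify this exact pipeline; character n-gram TF-IDF with logistic regression is simply a common choice for low-resource, morphologically rich languages.

```python
# Hedged sketch of a traditional ML baseline for abusive-language
# classification. The toy texts/labels are invented placeholders;
# the real datasets are linked from the AfriHate GitHub repository.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "the market opens early tomorrow",
    "thanks for sharing the election results",
    "you people are all worthless idiots",
    "get out of our country, vermin",
    "lovely weather in Lagos today",
    "those animals deserve nothing",
]
labels = ["neutral", "neutral", "abusive", "abusive", "neutral", "abusive"]

clf = make_pipeline(
    # Character n-grams within word boundaries are robust when
    # tokenization and spelling vary across dialects.
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["what a pleasant morning"]))
```

In practice such a baseline would be trained per language on the released splits and compared against zero-shot and fine-tuned LLM approaches.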