🤖 AI Summary
Indonesian hate speech detection has long suffered from scarce annotated data for marginalized groups (e.g., Shi’a Muslims, LGBTQ+ individuals, ethnic minorities), inadequate modeling of annotator subjectivity, and biased target-group representations. To address these issues, we introduce the first Indonesian dataset specifically designed for marginalized communities—comprising 43,692 election-related hateful and toxic texts—annotated by 19 diverse annotators using a multi-perspective scheme. Our work is the first to systematically integrate demographic attributes into Indonesian hate speech detection, enabling fine-grained target modeling and zero-shot generalization. We empirically demonstrate the differential impact of these attributes: incorporating demographic features boosts the zero-shot performance of GPT-3.5-turbo (+4.2% macro-F1) but harms fine-tuned models when over-specified. Evaluated across seven binary classification tasks, our best model achieves a macro-F1 of 0.78.
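The demographic conditioning described above can be sketched as a zero-shot prompt builder that optionally prepends annotator attributes before asking an LLM to label a text. This is a minimal illustration, not the authors' actual prompt template; the attribute keys and wording are assumptions.

```python
# Sketch of demographic conditioning for zero-shot hate speech
# classification, as described for GPT-3.5-turbo in the summary.
# The attribute names and prompt wording are illustrative assumptions,
# not the paper's actual template.

def build_prompt(text, demographics=None):
    """Return a zero-shot classification prompt.

    demographics: optional dict of annotator attributes, e.g.
    {"religion": "Islam", "age": "25-34"} (hypothetical keys).
    """
    parts = []
    if demographics:
        profile = ", ".join(f"{k}: {v}" for k, v in demographics.items())
        parts.append(f"Judge from the perspective of an annotator with {profile}.")
    parts.append("Classify the following Indonesian text as HATEFUL or NOT_HATEFUL.")
    parts.append(f"Text: {text}")
    parts.append("Answer:")
    return "\n".join(parts)


baseline = build_prompt("contoh teks")
conditioned = build_prompt("contoh teks", {"religion": "Islam", "age": "25-34"})
```

The conditioned variant simply adds a persona line; the model call itself (e.g., to a chat-completion endpoint) would send this string as the user message.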
📝 Abstract
Hate speech poses a significant threat to social harmony. Over the past two years, Indonesia has seen a ten-fold increase in the online hate speech ratio, underscoring the urgent need for effective detection mechanisms. However, progress is hindered by the limited availability of labeled data for Indonesian texts. The situation is even worse for marginalized groups, such as Shia Muslims, LGBTQ individuals, and ethnic minorities, because hate speech against them is underreported and less understood by detection tools. Furthermore, the lack of accommodation for annotator subjectivity in current datasets compounds the issue. To address this, we introduce IndoToxic2024, a comprehensive Indonesian hate speech and toxicity classification dataset. Comprising 43,692 entries annotated by 19 diverse individuals, the dataset focuses on texts targeting vulnerable groups in Indonesia, specifically during the country's most heated political event: the presidential election. We establish baselines for seven binary classification tasks, achieving a macro-F1 score of 0.78 with a BERT model (IndoBERTweet) fine-tuned for hate speech classification. Furthermore, we demonstrate how incorporating demographic information can enhance the zero-shot performance of the large language model gpt-3.5-turbo. However, we also caution that overemphasizing demographic information can degrade fine-tuned model performance due to data fragmentation.
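The macro-F1 score reported for the seven binary tasks is the unweighted mean of per-class F1, so the minority (hateful) class counts as much as the majority class. A minimal self-contained sketch of the metric for one binary task (not the authors' evaluation code):

```python
# Macro-F1 for a binary classification task: average the per-class F1
# scores without weighting by class frequency.

def f1(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def macro_f1(y_true, y_pred, labels=(0, 1)):
    """Unweighted mean of per-class F1 over the given labels."""
    scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        scores.append(f1(tp, fp, fn))
    return sum(scores) / len(scores)

# Perfect predictions yield macro-F1 = 1.0
print(macro_f1([0, 1, 1, 0], [0, 1, 1, 0]))  # → 1.0
```

In practice the same number comes from `sklearn.metrics.f1_score(y_true, y_pred, average="macro")`; the hand-rolled version above just makes the averaging explicit.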