🤖 AI Summary
Detecting hate speech in multilingual social media, particularly in Indian contexts involving code-mixing (e.g., Hindi–English), transliteration, and culture-specific expressions, remains challenging due to linguistic complexity and data scarcity.
Method: We propose a large language model (LLM)-based paradigm: we introduce IndoHateMix, a high-quality, human-annotated benchmark dataset for code-mixed hate speech in Indian online discourse, and systematically evaluate open-source LLMs (e.g., LLaMA-3.1) via few-shot and zero-shot prompting under low-resource settings.
Contribution/Results: LLaMA-3.1 achieves 89.7% F1 on IndoHateMix, outperforming the best fine-tuned BERT-based model by 4.2 percentage points while using only one-tenth of its labeled data. These results provide empirical evidence that prompt-based LLMs can supplant conventional fine-tuning pipelines for low-resource multilingual hate speech detection, contributing both a methodological framework and a foundational benchmark.
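The few-shot and zero-shot prompting setup described above can be sketched as follows. This is a minimal illustrative sketch: the prompt template, label names, and helper functions are our assumptions for exposition, not the paper's actual protocol or model interface.

```python
# Illustrative sketch of zero-/few-shot prompting for hate speech
# classification. Template and labels are assumptions, not the paper's.

LABELS = ("hate", "non-hate")

def build_prompt(text, examples=()):
    """Compose a classification prompt for an instruction-tuned LLM.

    examples: iterable of (text, label) pairs; empty -> zero-shot.
    """
    lines = [
        "Classify the following code-mixed (Hindi-English) post as one of: "
        + ", ".join(LABELS) + "."
    ]
    for ex_text, ex_label in examples:  # few-shot demonstrations
        lines.append(f"Post: {ex_text}\nLabel: {ex_label}")
    lines.append(f"Post: {text}\nLabel:")
    return "\n\n".join(lines)

def parse_label(completion):
    """Map a raw model completion onto the closest known label."""
    norm = completion.strip().lower()
    for label in LABELS:
        if norm.startswith(label):
            return label
    return None  # unparseable completion; caller may retry or default

# Zero-shot: no demonstrations in the prompt.
zero_shot = build_prompt("example post text")

# Few-shot: labelled demonstrations prepended before the query post.
few_shot = build_prompt(
    "example post text",
    examples=[("some friendly post", "non-hate")],
)
```

The returned prompt string would then be sent to the LLM (e.g., LLaMA-3.1 via any inference library), and `parse_label` maps the free-form completion back to a discrete label.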
📝 Abstract
Hate speech detection across contemporary social media presents unique challenges due to linguistic diversity and the informal nature of online discourse. These challenges are further amplified in settings involving code-mixing, transliteration, and culturally nuanced expressions. While fine-tuned transformer models, such as BERT, have become standard for this task, we argue that recent large language models (LLMs) not only surpass them but also redefine the landscape of hate speech detection more broadly. To support this claim, we introduce IndoHateMix, a diverse, high-quality dataset capturing Hindi–English code-mixing and transliteration in the Indian context, providing a realistic benchmark to evaluate model robustness in complex multilingual scenarios where existing NLP methods often struggle. Our extensive experiments show that cutting-edge LLMs (such as LLaMA-3.1) consistently outperform task-specific BERT-based models, even when fine-tuned on significantly less data. With their superior generalization and adaptability, LLMs offer a transformative approach to mitigating online hate in diverse environments. This raises the question of whether future work should prioritize developing specialized models or curating richer and more varied datasets to further enhance the effectiveness of LLMs.