🤖 AI Summary
This study addresses Arabic hate speech detection by systematically evaluating six state-of-the-art Arabic BERT models (e.g., AraBERT, MARBERT) together with two ensemble strategies: majority voting and weighted averaging. It presents a cross-model empirical comparison of Transformer-based architectures for this task, showing that ensembling improves model robustness and generalization. Experiments employ 5-fold cross-validation on the shared-task benchmark; majority voting yields the best performance on the training set, and evaluating this ensemble on the test set achieves an F1-score of 0.60 and an accuracy of 0.86. The key contributions are: (1) a multi-model benchmarking comparison tailored to Arabic hate speech detection; and (2) empirical evidence for the efficacy of ensemble methods for this classification task in a low-resource language setting, providing a reproducible methodological foundation for future research.
📝 Abstract
This paper describes our participation in the hate speech detection shared task, one of the subtasks of the CERIST NLP Challenge 2022. Our experiments evaluate the performance of six transformer models and their combination using two ensemble approaches. The best results on the training set, in a five-fold cross-validation scenario, were obtained with the ensemble approach based on majority voting. Evaluating this approach on the test set yielded an F1-score of 0.60 and an accuracy of 0.86.
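The two ensemble strategies can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the binary labels (1 = hate, 0 = not hate), and the example predictions are all hypothetical; it only shows the combination logic of majority voting over hard labels and weighted averaging over class probabilities.

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Combine hard label predictions (one list per model) by majority vote."""
    ensembled = []
    for example_preds in zip(*predictions_per_model):
        # most_common(1) picks the most frequent label for this example;
        # ties are broken by first-seen order.
        label, _ = Counter(example_preds).most_common(1)[0]
        ensembled.append(label)
    return ensembled

def weighted_average(probs_per_model, weights):
    """Average per-class probabilities with one weight per model,
    then return the argmax class for each example."""
    n_classes = len(probs_per_model[0][0])
    total = sum(weights)
    ensembled = []
    for example_probs in zip(*probs_per_model):
        avg = [
            sum(w * p[c] for w, p in zip(weights, example_probs)) / total
            for c in range(n_classes)
        ]
        ensembled.append(max(range(n_classes), key=avg.__getitem__))
    return ensembled

# Hypothetical predictions from three models over four examples.
model_a = [1, 0, 1, 0]
model_b = [1, 1, 0, 0]
model_c = [0, 1, 1, 0]
print(majority_vote([model_a, model_b, model_c]))  # → [1, 1, 1, 0]

# Hypothetical class-probability outputs from two models over two examples.
probs_a = [[0.9, 0.1], [0.4, 0.6]]
probs_b = [[0.6, 0.4], [0.2, 0.8]]
print(weighted_average([probs_a, probs_b], weights=[1.0, 1.0]))  # → [0, 1]
```

In practice the per-model predictions would come from the fine-tuned transformer models, and the weights for averaging would typically reflect each model's validation performance.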