Detection of Somali-written Fake News and Toxic Messages on the Social Media Using Transformer-based Language Models

📅 2025-03-23
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Low-resource languages like Somali face significant challenges in detecting misinformation and toxic content, primarily due to scarce annotated data and the absence of language-specific pretrained models. To address this, we introduce SomBERTaβ€”the first monolingual Transformer-based pretrained language model (PLM) tailored for Somali social media text. Built upon a manually curated, high-quality, domain-adapted dataset jointly annotated for fake news and toxicity classification, SomBERTa employs self-supervised pretraining followed by multi-task supervised fine-tuning. In a three-task joint evaluation, SomBERTa achieves a mean accuracy of 87.99%, substantially outperforming multilingual baselines such as AfriBERTa and AfroXLMR. Our contributions include: (1) the first multi-task monolingual PLM for Somali; (2) the first publicly available dual-labeled Somali dataset for fake news and toxicity detection; and (3) a transferable methodology framework for AI governance in low-resource language settings.

πŸ“ Abstract
The fact that everyone with a social media account can create and share content, together with the increasing public reliance on social media platforms as a news and information source, brings about significant challenges such as misinformation, fake news, and harmful content. Although human content moderation may be useful to an extent and is used by these platforms to flag posted material, AI models provide a more sustainable, scalable, and effective way to mitigate such harmful content. However, low-resourced languages such as Somali face limitations in AI automation, including scarce annotated training datasets and a lack of language models tailored to their unique linguistic characteristics. This paper presents part of our ongoing research to bridge some of these gaps for the Somali language. In particular, we created two human-annotated, social-media-sourced Somali datasets for two downstream applications, fake news and toxicity classification, and developed a transformer-based monolingual Somali language model (named SomBERTa) -- the first of its kind to the best of our knowledge. SomBERTa is then fine-tuned and evaluated on toxic content, fake news, and news topic classification datasets. Comparative evaluation of the proposed model against related multilingual models (e.g., AfriBERTa, AfroXLMR) demonstrated that SomBERTa consistently outperformed these comparators in both fake news and toxic content classification while achieving the best average accuracy (87.99%) across all tasks. This research contributes to Somali NLP by offering a foundational language model and a replicable framework for other low-resource languages, promoting digital and AI inclusivity and linguistic diversity.
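The headline 87.99% figure is the mean accuracy over the three fine-tuning tasks (toxicity, fake news, and news topic classification). As a minimal sketch of that macro-averaging step, using hypothetical per-task scores rather than the paper's actual results:

```python
# Macro-average accuracy across the three evaluation tasks.
# NOTE: the per-task scores below are HYPOTHETICAL placeholders for
# illustration only; they are not the paper's reported numbers.
task_accuracy = {
    "toxicity": 0.88,    # hypothetical
    "fake_news": 0.90,   # hypothetical
    "news_topic": 0.86,  # hypothetical
}

mean_accuracy = sum(task_accuracy.values()) / len(task_accuracy)
print(f"Mean accuracy across {len(task_accuracy)} tasks: {mean_accuracy:.2%}")
# → Mean accuracy across 3 tasks: 88.00%
```

This unweighted mean treats each task equally regardless of dataset size, which is the natural reading of "average accuracy across all tasks" in the abstract.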
Problem

Research questions and friction points this paper is trying to address.

Detecting Somali fake news and toxic content on social media
Addressing lack of AI resources for low-resource Somali language
Developing first monolingual Somali model for content classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created annotated Somali datasets for fake news and toxicity
Developed first Somali monolingual transformer model (SomBERTa)
SomBERTa outperformed multilingual models in classification tasks
Muhidin A. Mohamed
Aston University
Natural Language Processing · Data Science · Information Retrieval · Artificial Intelligence · Communication Networks
Shuab D. Ahmed
Jamhuriya University, Mogadishu, Somalia
Yahye A. Isse
Jamhuriya University, Mogadishu, Somalia
Hanad M. Mohamed
Jamhuriya University, Mogadishu, Somalia
Fuad M. Hassan
Somali National University, Mogadishu, Somalia
Houssein A. Assowe
University of Djibouti, Balbala, Djibouti