Multilingual Hope Speech Detection: A Comparative Study of Logistic Regression, mBERT, and XLM-RoBERTa with Active Learning

📅 2025-09-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses hope speech detection in multilingual low-resource settings. We propose an efficient framework integrating active learning with multilingual Transformer models (mBERT and XLM-RoBERTa), enabling high-performance classification with minimal labeled data. By iteratively selecting the most informative samples for annotation and retraining, our approach significantly enhances model generalization and cross-lingual transferability. Experiments span four languages—English, Spanish, German, and Urdu—on a multilingual benchmark; the XLM-RoBERTa + active learning variant achieves the highest accuracy, outperforming traditional logistic regression and baseline Transformer models. Our key contributions are threefold: (1) the first systematic application of active learning to multilingual hope speech detection; (2) empirical validation of its effectiveness, robustness, and scalability under low-resource constraints; and (3) provision of a reusable technical pathway for fostering positive online discourse ecosystems.

Technology Category

Application Category

📝 Abstract
Hope speech language that fosters encouragement and optimism plays a vital role in promoting positive discourse online. However, its detection remains challenging, especially in multilingual and low-resource settings. This paper presents a multilingual framework for hope speech detection using an active learning approach and transformer-based models, including mBERT and XLM-RoBERTa. Experiments were conducted on datasets in English, Spanish, German, and Urdu, including benchmark test sets from recent shared tasks. Our results show that transformer models significantly outperform traditional baselines, with XLM-RoBERTa achieving the highest overall accuracy. Furthermore, our active learning strategy maintained strong performance even with small annotated datasets. This study highlights the effectiveness of combining multilingual transformers with data-efficient training strategies for hope speech detection.
Problem

Research questions and friction points this paper is trying to address.

Detecting encouraging online content across multiple languages efficiently
Addressing detection challenges in low-resource multilingual settings
Improving hope speech identification using advanced transformer models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual transformer models like mBERT and XLM-RoBERTa
Active learning strategy for data-efficient training
Comparative framework tested on multiple languages including Urdu
🔎 Similar Papers
No similar papers found.
T
T. O. Abiola
Instituto Politécnico Nacional, Centro de Investigación en Computación, CDMX, Mexico
K
K. D. Abiodun
Ekiti State University, Ado-Ekiti, Nigeria
O
O. E. Olumide
Instituto Politécnico Nacional, Centro de Investigación en Computación, CDMX, Mexico
O
O. O. Adebanji
Instituto Politécnico Nacional, Centro de Investigación en Computación, CDMX, Mexico
O
O. H. Calvo
Instituto Politécnico Nacional, Centro de Investigación en Computación, CDMX, Mexico
Grigori Sidorov
Grigori Sidorov
Professor of Computational Linguistics, Instituto Politécnico Nacional (IPN), Mexico
Computational LinguisticsNatural Language ProcessingArtificial IntelligenceMachine Learning