🤖 AI Summary
This study addresses hope speech detection in multilingual low-resource settings. We propose an efficient framework integrating active learning with multilingual Transformer models (mBERT and XLM-RoBERTa), enabling high-performance classification with minimal labeled data. By iteratively selecting the most informative samples for annotation and retraining, our approach significantly enhances model generalization and cross-lingual transferability. Experiments span four languages—English, Spanish, German, and Urdu—on a multilingual benchmark; the XLM-RoBERTa + active learning variant achieves the highest accuracy, outperforming traditional logistic regression and baseline Transformer models. Our key contributions are threefold: (1) the first systematic application of active learning to multilingual hope speech detection; (2) empirical validation of its effectiveness, robustness, and scalability under low-resource constraints; and (3) provision of a reusable technical pathway for fostering positive online discourse ecosystems.
📝 Abstract
Hope speech, language that fosters encouragement and optimism, plays a vital role in promoting positive discourse online. However, its detection remains challenging, especially in multilingual and low-resource settings. This paper presents a multilingual framework for hope speech detection using an active learning approach and transformer-based models, including mBERT and XLM-RoBERTa. Experiments were conducted on datasets in English, Spanish, German, and Urdu, including benchmark test sets from recent shared tasks. Our results show that transformer models significantly outperform traditional baselines, with XLM-RoBERTa achieving the highest overall accuracy. Furthermore, our active learning strategy maintained strong performance even with small annotated datasets. This study highlights the effectiveness of combining multilingual transformers with data-efficient training strategies for hope speech detection.
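The data-efficient training loop described above, iteratively labeling the samples the current model is least certain about and retraining, can be sketched as follows. This is a minimal illustration only: it assumes uncertainty sampling as the "most informative" criterion and uses a logistic-regression stand-in for the Transformer classifiers, since the paper's exact query strategy and model code are not given here. All function names and parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sample(model, X_pool, k):
    """Return indices (into X_pool) of the k samples the model is least
    confident about, i.e. smallest maximum class probability.
    This is one common notion of 'most informative'; the paper's
    criterion may differ."""
    probs = model.predict_proba(X_pool)
    uncertainty = 1.0 - probs.max(axis=1)
    return np.argsort(uncertainty)[-k:]

def active_learning_loop(X, y, seed_size=20, rounds=5, k=10):
    """Simulated active learning: start from a small random seed set,
    then repeatedly query k uncertain samples and retrain.
    Here y is fully known, so 'annotation' is just a lookup."""
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X), size=seed_size, replace=False))
    pool = [i for i in range(len(X)) if i not in labeled]
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X[labeled], y[labeled])
        picked = uncertainty_sample(model, X[pool], k)
        newly = [pool[i] for i in picked]   # map back to global indices
        labeled.extend(newly)
        pool = [i for i in pool if i not in newly]
    model.fit(X[labeled], y[labeled])       # final fit on all acquired labels
    return model, labeled
```

In the paper's setting, the classifier would be mBERT or XLM-RoBERTa fine-tuned each round, and the queried samples would go to human annotators rather than a label lookup; the loop structure is the same.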