🤖 AI Summary
This study addresses hope speech detection in English and German contexts, aiming to enhance the automatic identification of positive expressions in multilingual settings. For the first time, we apply RoBERTa (monolingual) and XLM-RoBERTa (English–German bilingual) to this task, leveraging their Transformer-based architectures and fine-tuning strategies for effective detection. Experimental results demonstrate that the proposed approach achieves an accuracy of 81.8% and a weighted F1-score of 0.818 on the English dataset, and 78.5% accuracy with a weighted F1-score of 0.786 in the multilingual English–German setting. These findings confirm the efficacy of pretrained language models in recognizing positively valenced text and highlight their strong potential for cross-lingual transfer in affective computing tasks.
📝 Abstract
This paper describes a system submitted to the "PolyHope-M" shared task at RANLP 2025. In this work, various Transformer models have been implemented and evaluated for hope speech detection in English and German. RoBERTa has been implemented for English, while the multilingual model XLM-RoBERTa has been implemented for both English and German. The proposed system using RoBERTa reported a weighted F1-score of 0.818 and an accuracy of 81.8% for English. On the other hand, XLM-RoBERTa achieved a weighted F1-score of 0.786 and an accuracy of 78.5%. These results reflect the progress of pre-trained language models and how such models enhance the performance of diverse natural language processing tasks.
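The evaluation metric reported above is the weighted F1-score, i.e. the per-class F1 averaged with each class weighted by its support. As a quick reference, here is a minimal pure-Python sketch of that computation (the label names are illustrative, not taken from the shared-task data):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Weighted F1: per-class F1 scores averaged, weighted by class support."""
    support = Counter(y_true)          # number of true examples per class
    total = 0.0
    for c in support:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        total += support[c] * f1       # weight each class F1 by its support
    return total / len(y_true)

# Illustrative binary hope-speech labels (hypothetical, not from the dataset)
gold = ["hope", "hope", "not_hope", "not_hope"]
pred = ["hope", "not_hope", "not_hope", "not_hope"]
score = weighted_f1(gold, pred)
```

In practice this is equivalent to `sklearn.metrics.f1_score(..., average="weighted")`, which is what shared-task evaluations commonly use.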