🤖 AI Summary
To address the performance limitations of speech emotion recognition (SER) for low-resource languages (LRLs) caused by scarce labeled data, this paper proposes a self-supervised pretraining framework that integrates contrastive learning (CL) with Bootstrap Your Own Latent (BYOL). It presents the first systematic evaluation of this hybrid approach on SER across multiple LRLs—Urdu, German, and Bengali. The method substantially enhances cross-lingual transferability, yielding absolute F1-score improvements of 10.6%, 15.2%, and 13.9% respectively. Furthermore, it identifies key linguistic and acoustic factors governing generalization performance. By bridging interpretability and inclusivity in SER research, this work establishes a novel paradigm for robust emotion modeling in low-resource settings, advancing both methodological rigor and equitable AI deployment.
📝 Abstract
Speech Emotion Recognition (SER) has seen significant progress with deep learning, yet remains challenging for Low-Resource Languages (LRLs) due to the scarcity of annotated data. In this work, we explore unsupervised learning to improve SER in low-resource settings. Specifically, we investigate contrastive learning (CL) and Bootstrap Your Own Latent (BYOL) as self-supervised approaches to enhance cross-lingual generalization. Our methods achieve notable F1 score improvements of 10.6% in Urdu, 15.2% in German, and 13.9% in Bangla, demonstrating their effectiveness in LRLs. Additionally, we analyze model behavior to provide insights on key factors influencing performance across languages, and also highlighting challenges in low-resource SER. This work provides a foundation for developing more inclusive, explainable, and robust emotion recognition systems for underrepresented languages.