Learning More with Less: Self-Supervised Approaches for Low-Resource Speech Emotion Recognition

📅 2025-06-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the performance limits that scarce labeled data imposes on speech emotion recognition (SER) for low-resource languages (LRLs), this paper proposes self-supervised pretraining, investigating contrastive learning (CL) and Bootstrap Your Own Latent (BYOL) and evaluating them for SER across multiple LRLs: Urdu, German, and Bangla. The methods substantially enhance cross-lingual transferability, yielding F1-score improvements of 10.6%, 15.2%, and 13.9%, respectively. The analysis further identifies linguistic and acoustic factors that govern generalization across languages. By coupling interpretability with inclusivity, the work offers a foundation for robust emotion modeling in low-resource settings, advancing both methodological rigor and equitable AI deployment.
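
The summary names two self-supervised objectives, but this page includes no code and does not spell out how CL and BYOL interact. As a minimal, purely illustrative sketch (not the authors' implementation), the PyTorch snippet below combines an NT-Xent contrastive loss over two augmented views of an utterance with a BYOL-style prediction loss against a momentum (EMA) target network. The encoder architecture, embedding sizes, temperature, and mixing weight `lam` are all assumptions.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class OnlineNetwork(nn.Module):
    """Hypothetical frame encoder + projector + predictor (sizes are guesses)."""
    def __init__(self, feat_dim=40, emb_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                     nn.Linear(256, emb_dim))
        self.projector = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.ReLU(),
                                       nn.Linear(emb_dim, emb_dim))
        self.predictor = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.ReLU(),
                                       nn.Linear(emb_dim, emb_dim))

    def project(self, x):            # x: (B, T, feat_dim) log-mel frames
        return self.projector(self.encoder(x).mean(dim=1))  # mean-pool over time

def nt_xent(z1, z2, temperature=0.1):
    """SimCLR-style NT-Xent loss: matched views of an utterance are positives."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)              # (2N, D)
    sim = (z @ z.t()) / temperature                          # cosine similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

def byol_loss(p, z_target):
    """BYOL regression loss: 2 - 2 * cosine(online prediction, target projection)."""
    p, z_target = F.normalize(p, dim=1), F.normalize(z_target, dim=1)
    return (2 - 2 * (p * z_target).sum(dim=1)).mean()

def hybrid_loss(online, target, view1, view2, lam=0.5):
    """Illustrative combination; the paper may apply CL and BYOL separately."""
    z1, z2 = online.project(view1), online.project(view2)
    with torch.no_grad():                                    # target branch: no grads
        t2 = target.project(view2)
    return nt_xent(z1, z2) + lam * byol_loss(online.predictor(z1), t2)

@torch.no_grad()
def ema_update(online, target, tau=0.99):
    """BYOL's momentum update of the target network, run after each optimizer step."""
    for po, pt in zip(online.parameters(), target.parameters()):
        pt.mul_(tau).add_(po, alpha=1 - tau)

online = OnlineNetwork()
target = copy.deepcopy(online)                # frozen EMA copy of the online net
for p in target.parameters():
    p.requires_grad_(False)
```

In a training loop, one would backpropagate the hybrid loss through the online network only and call `ema_update` after every optimizer step, as in BYOL.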

📝 Abstract
Speech Emotion Recognition (SER) has seen significant progress with deep learning, yet remains challenging for Low-Resource Languages (LRLs) due to the scarcity of annotated data. In this work, we explore unsupervised learning to improve SER in low-resource settings. Specifically, we investigate contrastive learning (CL) and Bootstrap Your Own Latent (BYOL) as self-supervised approaches to enhance cross-lingual generalization. Our methods achieve notable F1-score improvements of 10.6% in Urdu, 15.2% in German, and 13.9% in Bangla, demonstrating their effectiveness in LRLs. Additionally, we analyze model behavior to provide insights into key factors influencing performance across languages and highlight challenges in low-resource SER. This work provides a foundation for developing more inclusive, explainable, and robust emotion recognition systems for underrepresented languages.
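
The abstract reports F1 gains, but the downstream protocol is not reproduced on this page. Below is a hedged sketch of one conventional evaluation step, reusing the hypothetical `OnlineNetwork` from the pretraining sketch above: fine-tune the pretrained encoder with a small classifier head on the labeled target-language data and report macro-F1. The number of emotion classes, epochs, and learning rate are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn
from sklearn.metrics import f1_score

def finetune_and_eval(online, train_loader, test_loader,
                      n_emotions=4, epochs=10, lr=1e-3):
    """Fine-tune the pretrained encoder plus a linear head; report macro-F1."""
    head = nn.Linear(128, n_emotions)          # 128 = emb_dim from pretraining
    opt = torch.optim.Adam(list(head.parameters()) +
                           list(online.encoder.parameters()), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:              # x: (B, T, F) log-mels, y: labels
            logits = head(online.encoder(x).mean(dim=1))
            opt.zero_grad()
            ce(logits, y).backward()
            opt.step()
    preds, gold = [], []
    with torch.no_grad():
        for x, y in test_loader:
            preds += head(online.encoder(x).mean(dim=1)).argmax(1).tolist()
            gold += y.tolist()
    return f1_score(gold, preds, average="macro")
```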
Problem

Research questions and friction points this paper is trying to address.

Improving low-resource speech emotion recognition with self-supervised learning
Enhancing cross-lingual generalization using contrastive learning and BYOL
Addressing data scarcity challenges in underrepresented language emotion recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised pretraining (CL and BYOL) tailored to low-resource SER
Contrastive learning enhances cross-lingual generalization
CL and BYOL lift F1 scores by 10.6% (Urdu), 15.2% (German), and 13.9% (Bangla); a view-generation sketch follows below
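
Both listed innovations presuppose a way to generate the two "views" of an unlabeled utterance that the pretraining sketch above consumes. The paper's actual augmentations are not given on this page; the sketch below uses an assumed SpecAugment-style scheme (time mask, frequency mask, light noise) with illustrative parameters.

```python
import torch

def augment(logmel, max_t=20, max_f=8, noise=0.01):
    """Mask one time span and one mel band of a (T, F) log-mel (assumed scheme)."""
    x = logmel.clone()
    T, Fq = x.shape
    t0 = int(torch.randint(0, max(1, T - max_t), (1,)))
    f0 = int(torch.randint(0, max(1, Fq - max_f), (1,)))
    x[t0:t0 + max_t, :] = 0.0                  # time mask
    x[:, f0:f0 + max_f] = 0.0                  # frequency mask
    return x + noise * torch.randn_like(x)     # light Gaussian jitter

utt = torch.randn(300, 40)                     # dummy 3 s utterance, 40 mel bins
batch1 = torch.stack([augment(utt)])           # view 1, shape (1, 300, 40)
batch2 = torch.stack([augment(utt)])           # view 2 of the same utterance
loss = hybrid_loss(online, target, batch1, batch2)
```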