🤖 AI Summary
This work addresses the insufficient cross-lingual robustness of speech emotion recognition (SER) for English and Southeast Asian languages. We propose a multitask speech emotion understanding model that jointly models discrete emotion categories (e.g., happiness, anger) and continuous affective dimensions (arousal, valence, dominance). To unify classification and regression optimization, we introduce a novel hybrid objective function combining weighted cross-entropy loss with Concordance Correlation Coefficient (CCC) loss. The model employs a lightweight speech encoder architecture to enable efficient multilingual representation learning. Evaluated on the Singapore Multilingual Speech Emotion Corpus and multiple public benchmarks, our approach consistently outperforms state-of-the-art open-source speech encoders and large audio foundation models. Results demonstrate superior cross-lingual generalization and fine-grained affective modeling capability, validating both effectiveness and robustness in low-resource multilingual SER scenarios.
📝 Abstract
We present MERaLiON-SER, a robust speech emotion recognition model designed for English and Southeast Asian languages. The model is trained with a hybrid objective combining weighted categorical cross-entropy and Concordance Correlation Coefficient (CCC) losses for joint discrete and dimensional emotion modelling. This dual approach enables the model to capture both distinct emotion categories (such as happy or angry) and fine-grained affective dimensions, such as arousal (intensity), valence (positivity/negativity), and dominance (sense of control), leading to a more comprehensive and robust representation of human affect. Extensive evaluations across Singapore's multilingual languages (English, Chinese, Malay, and Tamil) and other public benchmarks show that MERaLiON-SER consistently surpasses both open-source speech encoders and large Audio-LLMs. These results underscore the importance of specialised speech-only models for accurate paralinguistic understanding and cross-lingual generalisation. Furthermore, the proposed framework provides a foundation for integrating emotion-aware perception into future agentic audio systems, enabling more empathetic and contextually adaptive multimodal reasoning.
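As an illustrative sketch only (not the authors' released implementation), the hybrid objective described above can be expressed as a weighted cross-entropy term over emotion classes plus a `1 - CCC` term per affective dimension (arousal, valence, dominance). The mixing weight `lam` and the class-weight normalisation are assumptions for illustration:

```python
import numpy as np

def ccc(pred, gold):
    """Concordance Correlation Coefficient between two 1-D arrays."""
    pm, gm = pred.mean(), gold.mean()
    pv, gv = pred.var(), gold.var()
    cov = ((pred - pm) * (gold - gm)).mean()
    return 2.0 * cov / (pv + gv + (pm - gm) ** 2)

def hybrid_loss(logits, labels, dim_preds, dim_golds, class_weights, lam=1.0):
    """Weighted categorical cross-entropy + (1 - CCC) averaged over
    affective dimensions. `lam` is a hypothetical mixing weight."""
    # Softmax over class logits (rows = examples, cols = emotion classes).
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    # Weighted cross-entropy, normalised by the total weight of the batch.
    w = class_weights[labels]
    ce = -(w * np.log(probs[np.arange(len(labels)), labels])).sum() / w.sum()
    # CCC loss averaged over arousal / valence / dominance predictions.
    ccc_loss = np.mean([1.0 - ccc(p, g) for p, g in zip(dim_preds, dim_golds)])
    return ce + lam * ccc_loss
```

Unlike Pearson correlation, CCC also penalises mean and scale mismatch between predictions and gold ratings, which is why `1 - CCC` is a common training loss for dimensional emotion regression.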