MERaLiON-SER: Robust Speech Emotion Recognition Model for English and SEA Languages

📅 2025-11-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the insufficient cross-lingual robustness of speech emotion recognition (SER) for English and Southeast Asian languages. We propose a multitask speech emotion understanding model that jointly models discrete emotion categories (e.g., happiness, anger) and continuous affective dimensions (arousal, valence, dominance). To unify classification and regression optimization, we introduce a novel hybrid objective function combining weighted cross-entropy loss with Concordance Correlation Coefficient (CCC) loss. The model employs a lightweight speech encoder architecture to enable efficient multilingual representation learning. Evaluated on the Singapore Multilingual Speech Emotion Corpus and multiple public benchmarks, our approach consistently outperforms state-of-the-art open-source speech encoders and large audio foundation models. Results demonstrate superior cross-lingual generalization and fine-grained affective modeling capability, validating both effectiveness and robustness in low-resource multilingual SER scenarios.

📝 Abstract
We present MERaLiON-SER, a robust speech emotion recognition model designed for English and Southeast Asian languages. The model is trained using a hybrid objective combining weighted categorical cross-entropy and Concordance Correlation Coefficient (CCC) losses for joint discrete and dimensional emotion modelling. This dual approach enables the model to capture both the distinct categories of emotion (like happy or angry) and the fine-grained dimensions, such as arousal (intensity), valence (positivity/negativity), and dominance (sense of control), leading to a more comprehensive and robust representation of human affect. Extensive evaluations across multilingual Singaporean languages (English, Chinese, Malay, and Tamil) and other public benchmarks show that MERaLiON-SER consistently surpasses both open-source speech encoders and large Audio-LLMs. These results underscore the importance of specialised speech-only models for accurate paralinguistic understanding and cross-lingual generalisation. Furthermore, the proposed framework provides a foundation for integrating emotion-aware perception into future agentic audio systems, enabling more empathetic and contextually adaptive multimodal reasoning.
Problem

Research questions and friction points this paper is trying to address.

Robust speech emotion recognition for English and Southeast Asian languages
Joint discrete and dimensional emotion modelling using hybrid objective
Cross-lingual generalization surpassing speech encoders and Audio-LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid objective combining cross-entropy and CCC losses
Dual approach for discrete and dimensional emotion modelling
Specialised speech-only model for cross-lingual generalisation
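The hybrid objective above can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: the balancing weight `lam` and the exact form of the class weighting are assumptions, since the abstract only states that weighted cross-entropy and CCC losses are combined.

```python
import numpy as np

def ccc(pred, gold):
    """Concordance Correlation Coefficient (CCC) between two 1-D arrays."""
    mp, mg = pred.mean(), gold.mean()
    vp, vg = pred.var(), gold.var()
    cov = ((pred - mp) * (gold - mg)).mean()
    return 2.0 * cov / (vp + vg + (mp - mg) ** 2)

def weighted_cross_entropy(probs, labels, class_weights):
    """Weighted categorical cross-entropy over predicted class probabilities."""
    return -float(np.mean([class_weights[y] * np.log(probs[i, y] + 1e-12)
                           for i, y in enumerate(labels)]))

def hybrid_loss(probs, labels, class_weights, dim_preds, dim_golds, lam=1.0):
    """Hybrid objective: weighted cross-entropy on discrete emotion classes
    plus a (1 - CCC) regression term averaged over the affective dimensions
    (arousal, valence, dominance). `lam` (hypothetical) balances the terms."""
    ce = weighted_cross_entropy(probs, labels, class_weights)
    ccc_term = float(np.mean([1.0 - ccc(p, g)
                              for p, g in zip(dim_preds, dim_golds)]))
    return ce + lam * ccc_term
```

Because CCC jointly penalises correlation, scale, and mean shift, minimising `1 - CCC` pushes the dimensional predictions toward agreement with the gold annotations rather than merely correlating with them, which is why CCC is the standard regression objective for arousal/valence/dominance.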
Authors

Hardik B. Sailor - Institute for Infocomm Research (I2R), A*STAR, Singapore
Aw Ai Ti - Institute for Infocomm Research (I2R), A*STAR, Singapore
Chen Fang Yih Nancy - Institute for Infocomm Research (I2R), A*STAR, Singapore
Chiu Ying Lay - Institute for Infocomm Research (I2R), A*STAR, Singapore
Ding Yang - Nanjing University
Yingxu He - Institute for Infocomm Research (I2R), A*STAR, Singapore
Ridong Jiang - Senior Scientist, Institute for Infocomm Research, A*STAR, Singapore
Jingtao Li - Institute for Infocomm Research (I2R), A*STAR, Singapore
Jingyi Liao - Institute for Infocomm Research (I2R), A*STAR, Singapore
Zhuohan Liu - Research Engineer
Yanfeng Lu - Institute for Infocomm Research (I2R), A*STAR, Singapore
Ma Yi - Institute for Infocomm Research (I2R), A*STAR, Singapore
Manas Gupta - Senior Research Engineer, Agency for Science, Technology & Research (A*STAR), Singapore
Muhammad Huzaifah Bin Md Shahrin - Institute for Infocomm Research (I2R), A*STAR, Singapore
Nabilah Binte Md Johan - Institute for Infocomm Research (I2R), A*STAR, Singapore
Nattadaporn Lertcheva - Institute for Infocomm Research (I2R), A*STAR, Singapore
Chunlei Pan - Institute for Infocomm Research (I2R), A*STAR, Singapore
Pham Minh Duc - Institute for Infocomm Research (I2R), A*STAR, Singapore
Siti Maryam Binte Ahmad Subaidi - Institute for Infocomm Research (I2R), A*STAR, Singapore
Siti Umairah Binte Mohammad Salleh - Institute for Infocomm Research (I2R), A*STAR, Singapore
Sun Shuo - Institute for Infocomm Research (I2R), A*STAR, Singapore
T. K. Vangani - Institute for Infocomm Research (I2R), A*STAR, Singapore
Qiongqiong Wang - Lead Research Engineer, Institute for Infocomm Research, A*STAR, Singapore
Won Cheng Yi Lewis - Institute for Infocomm Research (I2R), A*STAR, Singapore
Wong Heng Meng Jeremy - Institute for Infocomm Research (I2R), A*STAR, Singapore
Jinyang Wu - Institute for Infocomm Research (I2R), A*STAR, Singapore
Huayun Zhang - Institute for Infocomm Research (I2R), A*STAR, Singapore
Longyin Zhang - Institute for Infocomm Research (I2R), A*STAR, Singapore
Xunlong Zou - Institute for Infocomm Research (I2R), A*STAR, Singapore