🤖 AI Summary
This study addresses the severe performance degradation that reverberation and overlapping speech cause for speech emotion recognition in complex acoustic environments such as cocktail parties. To tackle this challenge, the authors propose a framework that integrates multi-channel speech enhancement with self-supervised audio-visual feature learning. The front-end employs DNN-based weighted prediction error (DNN-WPE) dereverberation and mask-guided minimum variance distortionless response (MVDR) beamforming to separate the target speaker's speech. The back-end jointly leverages HuBERT and a Vision Transformer (ViT) to extract and fuse speech and visual features. Notably, this work is the first to incorporate multi-channel speech enhancement into a self-supervised audio-visual emotion recognition pipeline, demonstrating generalization under cross-domain zero-shot settings. On mixture speech constructed from the IEMOCAP and MSP-FACE datasets, the method yields absolute improvements of up to 9.5%, 8.5%, and 9.1% in weighted accuracy, unweighted accuracy, and F1 score, respectively, over single-channel baselines, corresponding to relative gains of 17.1%, 14.7%, and 16.0%.
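The DNN-WPE dereverberation step described above can be sketched as follows. This is an illustrative numpy implementation under standard WPE assumptions, not the authors' code: in DNN-WPE a neural network supplies the per-frame power estimate `lam`, whereas here it is computed from the observation itself, and the function name `wpe_one_iter` is hypothetical.

```python
import numpy as np

def wpe_one_iter(Y, taps=5, delay=3, eps=1e-8):
    """One iteration of WPE dereverberation, applied per frequency bin.

    Y: complex multi-channel STFT of shape (C, F, T).
    Returns a dereverberated STFT of the same shape.
    NOTE: in DNN-WPE, `lam` would be predicted by a DNN; here it is
    a simple power estimate from the observed mixture (assumption).
    """
    C, F, T = Y.shape
    X = Y.copy()
    for f in range(F):
        y = Y[:, f, :]                                  # (C, T)
        lam = np.mean(np.abs(y) ** 2, axis=0) + eps     # per-frame power (T,)
        # Stack delayed frames: taps past frames, skipping `delay` frames
        # to preserve the direct path and early reflections.
        Ytil = np.zeros((C * taps, T), dtype=complex)
        for k in range(taps):
            shift = delay + k
            Ytil[k * C:(k + 1) * C, shift:] = y[:, :T - shift]
        R = (Ytil / lam) @ Ytil.conj().T                # weighted correlation
        P = (Ytil / lam) @ y.conj().T                   # cross-correlation
        G = np.linalg.solve(R + eps * np.eye(C * taps), P)
        X[:, f, :] = y - G.conj().T @ Ytil              # subtract late reverb
    return X
```

A full WPE implementation would alternate this filter update with re-estimation of the power spectrum; DNN-WPE replaces that inner estimation loop with a single network forward pass.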
📝 Abstract
This paper highlights the critical importance of multi-channel speech enhancement (MCSE) for speech emotion recognition (ER) in cocktail party scenarios. A multi-channel speech dereverberation and separation front-end integrating DNN-WPE and mask-based MVDR extracts the target speaker's speech from the mixture before it is fed into the downstream ER back-end, which uses HuBERT- and ViT-based speech and visual features. Experiments on mixture speech constructed from the IEMOCAP and MSP-FACE datasets suggest that the MCSE outputs consistently outperform domain fine-tuned single-channel speech representations produced by: a) Conformer-based metric GANs; and b) WavLM SSL features with optional SE-ER dual-task fine-tuning. Statistically significant increases in weighted accuracy, unweighted accuracy and F1 measures of up to 9.5%, 8.5% and 9.1% absolute (17.1%, 14.7% and 16.0% relative) are obtained over the above single-channel baselines. The generalization of IEMOCAP-trained MCSE front-ends is also demonstrated when they are applied zero-shot to out-of-domain MSP-FACE data.
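The mask-based MVDR beamforming used in the front-end can be sketched as follows. This is an illustrative numpy implementation of the standard mask-weighted MVDR formulation, not the paper's code: in the actual system the time-frequency masks come from a neural network, while here they are passed in as arrays, and the name `mask_mvdr` is hypothetical.

```python
import numpy as np

def mask_mvdr(Y, speech_mask, noise_mask, ref_ch=0, eps=1e-8):
    """Mask-guided MVDR beamforming, applied per frequency bin.

    Y: complex multi-channel STFT of shape (C, F, T).
    speech_mask, noise_mask: real masks of shape (F, T) in [0, 1]
    (assumed to be provided by a separately trained mask estimator).
    Returns a single-channel enhanced STFT of shape (F, T).
    """
    C, F, T = Y.shape
    X = np.zeros((F, T), dtype=complex)
    for f in range(F):
        y = Y[:, f, :]                                   # (C, T)
        # Mask-weighted spatial covariance matrices of speech and noise
        phi_s = (speech_mask[f] * y) @ y.conj().T / (speech_mask[f].sum() + eps)
        phi_n = (noise_mask[f] * y) @ y.conj().T / (noise_mask[f].sum() + eps)
        phi_n += eps * np.eye(C)                         # diagonal loading
        num = np.linalg.solve(phi_n, phi_s)              # Phi_n^{-1} Phi_s
        w = num[:, ref_ch] / (np.trace(num) + eps)       # MVDR filter (C,)
        X[f] = w.conj() @ y                              # w^H y per frame
    return X
```

This follows the reference-channel MVDR solution w = Φ_n⁻¹Φ_s u / tr(Φ_n⁻¹Φ_s); the diagonal loading term guards against ill-conditioned noise covariance estimates when a mask is close to zero over a whole bin.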