Learning Discriminative Features from Spectrograms Using Center Loss for Speech Emotion Recognition

📅 2019-05-01
🏛️ IEEE International Conference on Acoustics, Speech, and Signal Processing
📈 Citations: 48
Influential: 6
🤖 AI Summary
To address the challenges of ambiguous emotional representation and weak feature discriminability in speech emotion recognition (SER), this paper introduces center loss, a metric learning technique, into SER, proposing a joint optimization framework that combines softmax cross-entropy loss with center loss. The method simultaneously enhances inter-class separability and intra-class compactness on variable-length Mel-spectrograms and STFT spectrograms. Leveraging a deep convolutional neural network, it learns highly discriminative emotional features directly from spectrograms without handcrafted features. Experiments demonstrate that both unweighted and weighted accuracy improve by over 3% with Mel-spectrogram input and by more than 4% with STFT spectrogram input, relative to the softmax-only baseline. This work establishes a novel paradigm for emotion feature learning in SER and empirically validates the effectiveness of metric learning for improving discriminative capability in speech-based affective computing.

📝 Abstract
Identifying the emotional state from speech is essential for natural interaction between a machine and the speaker. However, extracting effective features for emotion recognition is difficult, as emotions are ambiguous. We propose a novel approach to learn discriminative features from variable-length spectrograms for emotion recognition by combining softmax cross-entropy loss with center loss. The softmax cross-entropy loss makes features from different emotion categories separable, while center loss efficiently pulls features belonging to the same emotion category toward their center. Combining the two losses greatly enhances discriminative power, leading the network to learn more effective features for emotion recognition. As demonstrated by the experimental results, after introducing center loss, both unweighted and weighted accuracy improve by over 3% on Mel-spectrogram input, and by more than 4% on Short-Time Fourier Transform (STFT) spectrogram input.
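The joint objective described in the abstract can be sketched numerically. The paper's exact network architecture, feature dimensionality, and weighting coefficient are not given here, so the code below is a minimal NumPy illustration of the loss combination only: the total loss is the softmax cross-entropy on the logits plus a weighted center-loss term, where the center loss is half the mean squared distance between each feature vector and its class center. The weight `lam` is an illustrative value, not the paper's setting.

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Numerically stable softmax cross-entropy, averaged over the batch.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def center_loss(features, labels, centers):
    # 0.5 * mean squared distance between each feature and its class center,
    # pulling same-class features toward a common point.
    diffs = features - centers[labels]
    return 0.5 * (diffs ** 2).sum(axis=1).mean()

def joint_loss(logits, features, labels, centers, lam=0.5):
    # Total objective: cross-entropy (inter-class separability)
    # plus lambda-weighted center loss (intra-class compactness).
    return softmax_cross_entropy(logits, labels) + lam * center_loss(features, labels, centers)
```

When every feature already coincides with its class center, the center-loss term vanishes and the joint loss reduces to the cross-entropy alone, which is the softmax-only baseline the paper compares against.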
Problem

Research questions and friction points this paper is trying to address.

Speech Emotion Recognition
Feature Extraction
Audio Analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

softmax cross-entropy
center loss
affective feature learning
Dongyang Dai
Unknown affiliation
Speech Synthesis; Computational Advertising; Machine Learning
Zhiyong Wu
Tsinghua-CUHK Joint Research Center for Media Sciences, Technologies and Systems, Graduate School at Shenzhen, Tsinghua University, Shenzhen, China; Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing, China; Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China
Runnan Li
Beijing University of Posts and Telecommunications
Xixin Wu
The Chinese University of Hong Kong
Jia Jia
Tsinghua-CUHK Joint Research Center for Media Sciences, Technologies and Systems, Graduate School at Shenzhen, Tsinghua University, Shenzhen, China; Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing, China
H. Meng
Tsinghua-CUHK Joint Research Center for Media Sciences, Technologies and Systems, Graduate School at Shenzhen, Tsinghua University, Shenzhen, China; Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China