SigWavNet: Learning Multiresolution Signal Wavelet Network for Speech Emotion Recognition

📅 2025-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address poor robustness, weak discriminability, and noise sensitivity in raw-waveform modeling for speech emotion recognition (SER), this paper proposes an end-to-end multi-resolution waveform modeling framework. Methodologically, it introduces: (1) learnable wavelet bases coupled with adaptive hard-thresholding denoising for localized time-frequency noise suppression; (2) an architecture combining 1D dilated convolutions with a spatial attention layer and bidirectional gated recurrent units with a temporal attention layer to capture long-range temporal dependencies; and (3) native support for variable-length utterances via direct end-to-end processing, eliminating segmentation and handcrafted feature engineering. Evaluated on IEMOCAP and EMO-DB, the framework improves over state-of-the-art methods while offering noise robustness, emotional discriminability, and architectural simplicity.
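The "adaptive hard-thresholding denoising" in point (1) can be sketched in isolation. The paper learns asymmetric thresholds end-to-end as part of an activation function; the sketch below uses fixed illustrative thresholds and a function name of my own choosing, so it shows only the thresholding rule, not the learning:

```python
def asymmetric_hard_threshold(coeffs, t_pos, t_neg):
    """Zero out wavelet coefficients inside the asymmetric dead zone (-t_neg, t_pos).

    t_pos and t_neg are illustrative constants here; in SigWavNet they are
    learnable parameters trained jointly with the rest of the network.
    """
    return [c if (c >= t_pos or c <= -t_neg) else 0.0 for c in coeffs]

# Coefficients with small magnitude (likely noise) are suppressed,
# while large positive/negative coefficients pass through unchanged.
denoised = asymmetric_hard_threshold([0.5, 0.1, -0.25, -0.1], t_pos=0.3, t_neg=0.2)
# → [0.5, 0.0, -0.25, 0.0]
```

The asymmetry (separate positive and negative thresholds) lets the model suppress noise differently on each side of zero rather than assuming a symmetric noise distribution.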

📝 Abstract
In the field of human-computer interaction and psychological assessment, speech emotion recognition (SER) plays an important role in deciphering emotional states from speech signals. Despite advancements, challenges persist due to system complexity, feature distinctiveness issues, and noise interference. This paper introduces a new end-to-end (E2E) deep learning multi-resolution framework for SER, addressing these limitations by extracting meaningful representations directly from raw waveform speech signals. By leveraging the properties of the fast discrete wavelet transform (FDWT), including the cascade algorithm, conjugate quadrature filter, and coefficient denoising, our approach introduces a learnable model for both wavelet bases and denoising through deep learning techniques. The framework incorporates an activation function for learnable asymmetric hard thresholding of wavelet coefficients. Our approach exploits the capabilities of wavelets for effective localization in both the time and frequency domains. We then combine one-dimensional dilated convolutional neural networks (1D dilated CNN) with a spatial attention layer and bidirectional gated recurrent units (Bi-GRU) with a temporal attention layer to efficiently capture the nuanced spatial and temporal characteristics of emotional features. By handling variable-length speech without segmentation and eliminating the need for pre- or post-processing, the proposed model outperformed state-of-the-art methods on the IEMOCAP and EMO-DB datasets. The source code of this paper is shared in the GitHub repository: https://github.com/alaaNfissi/SigWavNet-Learning-Multiresolution-Signal-Wavelet-Network-for-Speech-Emotion-Recognition.
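The FDWT machinery the abstract names can be illustrated with a single analysis level: the high-pass filter is derived from the low-pass filter through the conjugate quadrature relation g[k] = (-1)^k h[L-1-k], and each branch is convolved with the signal and downsampled by 2 (the cascade algorithm repeats this on the approximation branch). A minimal pure-Python sketch, with illustrative function names and a fixed Haar filter standing in for the learnable wavelet bases of the paper:

```python
def cqf_highpass(h):
    """Conjugate quadrature filter: g[k] = (-1)^k * h[L-1-k]."""
    L = len(h)
    return [(-1) ** k * h[L - 1 - k] for k in range(L)]

def dwt_level(x, h):
    """One FDWT analysis level: filter with h (low-pass) and its CQF
    high-pass counterpart, then downsample by 2 ('valid' convolution)."""
    g = cqf_highpass(h)
    L = len(h)

    def filter_and_downsample(f):
        return [sum(f[k] * x[i + k] for k in range(L))
                for i in range(0, len(x) - L + 1, 2)]

    return filter_and_downsample(h), filter_and_downsample(g)

# Haar low-pass filter; a constant signal produces zero detail coefficients,
# since the high-pass branch cancels equal neighbouring samples.
h = [2 ** -0.5, 2 ** -0.5]
approx, detail = dwt_level([1.0, 1.0, 1.0, 1.0], h)
# detail → [0.0, 0.0]
```

In SigWavNet the filter taps are trained rather than fixed, and the learnable thresholding is applied to the detail coefficients at each level before further processing.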
Problem

Research questions and friction points this paper is trying to address.

Speech Emotion Recognition
Acoustic Complexity
Noise Robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

SigWavNet
FDWT
Emotion Recognition