🤖 AI Summary
This work addresses the significant performance degradation in speech emotion recognition (SER) caused by the loss of paralinguistic information when speech is quantized into discrete tokens. To mitigate this issue, the authors propose a multi-layer fusion strategy combined with explicit integration of paralinguistic features: representations from multiple layers of a fine-tuned WavLM-Large model are fused via an attention mechanism and further augmented with acoustic features extracted using openSMILE. This approach effectively recovers the semantic and emotional cues lost during discrete tokenization. Experimental results demonstrate that the proposed method consistently narrows the performance gap between discrete tokens and continuous representations across several state-of-the-art neural audio codecs, including SpeechTokenizer, DAC, and EnCodec, thereby confirming its effectiveness and generalizability.
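To make the fusion idea concrete, here is a minimal sketch of attention-weighted layer fusion combined with an utterance-level openSMILE vector. This is not the authors' exact architecture; the layer count, feature dimensions, pooling choice, and classifier head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LayerFusionSER(nn.Module):
    """Sketch: learn a softmax weight per WavLM layer, fuse the layer features
    by weighted sum, concatenate an openSMILE functionals vector, and classify.
    All dimensions here are assumptions, not the paper's reported settings."""

    def __init__(self, num_layers=25, hidden_dim=1024, opensmile_dim=88, num_emotions=4):
        super().__init__()
        # One learnable scalar score per transformer layer.
        self.layer_scores = nn.Parameter(torch.zeros(num_layers))
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim + opensmile_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_emotions),
        )

    def forward(self, layer_feats, opensmile_feats):
        # layer_feats: (batch, num_layers, time, hidden_dim) — WavLM hidden states
        # opensmile_feats: (batch, opensmile_dim) — utterance-level functionals
        weights = torch.softmax(self.layer_scores, dim=0)          # (num_layers,)
        fused = torch.einsum("l,blth->bth", weights, layer_feats)  # weighted sum over layers
        pooled = fused.mean(dim=1)                                  # mean-pool over time
        combined = torch.cat([pooled, opensmile_feats], dim=-1)
        return self.classifier(combined)
```

The design choice worth noting is that the attention here acts over *layers*, not time: lower layers of WavLM tend to retain more acoustic/paralinguistic detail while upper layers are more semantic, so a learned layer weighting lets the model recover cues that any single layer (or its quantized version) would discard.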
📝 Abstract
Discrete speech tokens offer significant advantages for storage and language model integration, but their application in speech emotion recognition (SER) is limited by paralinguistic information loss during quantization. This paper presents a comprehensive investigation of discrete tokens for SER. Using a fine-tuned WavLM-Large model, we systematically quantify performance degradation across different layer configurations and k-means quantization granularities. To recover the lost information, we propose two key strategies: (1) attention-based multi-layer fusion to recapture complementary information from different layers, and (2) integration of openSMILE features to explicitly reintroduce paralinguistic cues. We also compare mainstream neural codec tokenizers (SpeechTokenizer, DAC, EnCodec) and analyze their behaviors when fused with acoustic features. Our findings demonstrate that through multi-layer fusion and acoustic feature integration, discrete tokens can close the performance gap with continuous representations in SER tasks.
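The pipeline the abstract describes — k-means tokenization of continuous layer features plus explicit openSMILE features — can be sketched as below. The cluster count, the choice of layer, the file names, and the eGeMAPS feature set are assumptions for illustration; only the `opensmile` package calls follow its documented API.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
import opensmile

# 1) Fit a k-means codebook on frame-level features dumped from one WavLM layer.
#    File names and the 1024-cluster granularity are hypothetical.
frame_feats = np.load("wavlm_layer12_frames.npy")   # (num_frames, 1024)
kmeans = MiniBatchKMeans(n_clusters=1024, batch_size=4096, random_state=0).fit(frame_feats)

# 2) Quantize an utterance: each frame becomes the index of its nearest centroid,
#    i.e. a discrete speech token.
utt_feats = np.load("utterance_frames.npy")          # (T, 1024)
tokens = kmeans.predict(utt_feats)                   # (T,) discrete token ids

# 3) Extract utterance-level paralinguistic features to fuse back in alongside
#    the token-based representation (eGeMAPS functionals, 88-dim).
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)
para_feats = smile.process_file("utterance.wav")
```

In this framing, step 2 is where the paralinguistic degradation occurs (many acoustically distinct frames collapse into the same cluster id), and step 3 is the explicit compensation path the paper proposes to fuse back in.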