Recovering Performance in Speech Emotion Recognition from Discrete Tokens via Multi-Layer Fusion and Paralinguistic Feature Integration

📅 2026-01-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the significant performance degradation in speech emotion recognition (SER) caused by the loss of paralinguistic information during the quantization of discrete speech tokens. To mitigate this issue, the authors propose a multi-layer fusion strategy combined with explicit integration of paralinguistic features: representations from multiple layers of a fine-tuned WavLM-Large model are fused via an attention mechanism and further augmented with acoustic features extracted using openSMILE. This approach effectively recovers the semantic and emotional cues lost during discrete tokenization. Experimental results demonstrate that the proposed method consistently narrows the performance gap between discrete tokens and continuous representations across several state-of-the-art neural audio codecs, including SpeechTokenizer, DAC, and EnCodec, confirming its effectiveness and generalizability.
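The attention-based multi-layer fusion described above can be sketched in a few lines: softmax-normalized scalar weights select how much each transformer layer contributes, and the fused utterance vector is concatenated with an openSMILE-style acoustic vector. This is a minimal numpy illustration, not the authors' code; the 4-layer toy tensor, the `fuse_layers` name, and the 6-dimensional stand-in for openSMILE functionals are all assumptions for the sketch.

```python
import numpy as np

def fuse_layers(layer_feats, layer_logits):
    """Weighted sum over layers with softmax-normalized learnable scores.

    layer_feats: (L, T, D) hidden states from L transformer layers
    layer_logits: (L,) scalar score per layer (learned during training)
    """
    w = np.exp(layer_logits - layer_logits.max())
    w = w / w.sum()                               # softmax over layers
    fused = np.tensordot(w, layer_feats, axes=1)  # (T, D) frame-level fusion
    return fused, w

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 10, 8))   # toy: 4 layers, 10 frames, dim 8
fused, w = fuse_layers(feats, np.array([0.1, 0.5, 1.0, 0.2]))

utt = fused.mean(axis=0)                  # mean-pool frames to one vector
opensmile_vec = rng.standard_normal(6)    # stand-in for openSMILE functionals
clf_input = np.concatenate([utt, opensmile_vec])  # fed to the SER classifier
```

In a real system the logits would be trained jointly with the classifier; the explicit acoustic concatenation is what reintroduces paralinguistic cues the tokens lost.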

📝 Abstract
Discrete speech tokens offer significant advantages for storage and language model integration, but their application in speech emotion recognition (SER) is limited by paralinguistic information loss during quantization. This paper presents a comprehensive investigation of discrete tokens for SER. Using a fine-tuned WavLM-Large model, we systematically quantify performance degradation across different layer configurations and k-means quantization granularities. To recover the information loss, we propose two key strategies: (1) attention-based multi-layer fusion to recapture complementary information from different layers, and (2) integration of openSMILE features to explicitly reintroduce paralinguistic cues. We also compare mainstream neural codec tokenizers (SpeechTokenizer, DAC, EnCodec) and analyze their behaviors when fused with acoustic features. Our findings demonstrate that through multi-layer fusion and acoustic feature integration, discrete tokens can close the performance gap with continuous representations in SER tasks.
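The abstract's k-means quantization step, where continuous frame features are mapped to codebook indices, is the point at which paralinguistic detail is discarded. The following self-contained numpy sketch shows that pipeline in miniature; the 8-cluster codebook and random toy features are illustrative assumptions (the paper sweeps real quantization granularities over WavLM-Large features).

```python
import numpy as np

def kmeans_fit(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means: learn a k-entry codebook from features X (N, D)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)  # (N, k) distances
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centroids[j] = pts.mean(0)
    return centroids

def quantize(X, centroids):
    """Map each frame to its nearest centroid: discrete ids + de-quantized vectors."""
    d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
    tokens = d.argmin(1)
    return tokens, centroids[tokens]

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 16))      # toy frame-level continuous features
codebook = kmeans_fit(X, k=8)
tokens, deq = quantize(X, codebook)
mse = ((X - deq) ** 2).mean()           # nonzero: information lost to quantization
```

The residual `mse` is exactly the information loss the paper's fusion and openSMILE strategies aim to compensate for; larger codebooks shrink it at the cost of storage and language-model vocabulary size.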
Problem

Research questions and friction points this paper is trying to address.

speech emotion recognition
discrete tokens
paralinguistic information loss
quantization
performance degradation
Innovation

Methods, ideas, or system contributions that make the work stand out.

discrete speech tokens
multi-layer fusion
paralinguistic feature integration
speech emotion recognition
WavLM-Large
Esther Sun
Language Technologies Institute, Carnegie Mellon University, USA
Abinay Reddy Naini
Visiting PhD Candidate, Language Technologies Institute, Carnegie Mellon University
Affective Computing · Machine Learning · Speech · Multimodal signal processing
Carlos Busso
Language Technologies Institute, Carnegie Mellon University, USA