Dual Information Speech Language Models for Emotional Conversations

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current speech-language models (SLMs) struggle to model paralinguistic cues, such as prosody and rhythm, in emotional dialogue, while directly extending frozen large language models (LLMs) degrades contextual understanding. To address this, the paper proposes a dual heterogeneous adapter architecture that explicitly disentangles linguistic and paralinguistic representations in speech. The method pairs weakly supervised training with a controlled-stochasticity mechanism, enabling parameter-efficient fine-tuning on general-purpose speech data without introducing task-specific embeddings, thereby preserving contextual coherence and cross-task generalization. Experiments demonstrate competitive performance on emotional dialogue tasks, and the authors present this as the first approach to achieve joint yet disentangled modeling of paralinguistic and semantic information. The framework is both data- and parameter-efficient, offering a scalable, robust solution for emotion-aware speech-language modeling.
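No reference code accompanies this summary, so the sketch below is a minimal, hypothetical PyTorch illustration of the dual-adapter idea, assuming a HuggingFace-style causal LM as the frozen backbone. One adapter maps linguistic content token-by-token, the other pools prosody-level information into a compact summary token, and Gaussian noise on the paralinguistic path stands in for the controlled-stochasticity mechanism. All module shapes, names, and the noise form are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class DualAdapterSLM(nn.Module):
    """Hypothetical sketch: a frozen LLM fed by two heterogeneous adapters."""

    def __init__(self, llm: nn.Module, speech_dim: int = 768,
                 llm_dim: int = 4096, noise_std: float = 0.1):
        super().__init__()
        self.llm = llm
        for p in self.llm.parameters():  # backbone stays frozen
            p.requires_grad = False

        # Linguistic adapter: per-frame projection of speech features
        # into the LLM embedding space.
        self.linguistic_adapter = nn.Sequential(
            nn.Linear(speech_dim, llm_dim), nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )
        # Paralinguistic adapter: deliberately different (heterogeneous)
        # design that summarizes utterance-level prosodic information.
        self.paralinguistic_adapter = nn.Sequential(
            nn.Linear(speech_dim, llm_dim), nn.Tanh(),
        )
        self.noise_std = noise_std  # stand-in for controlled stochasticity

    def forward(self, speech_feats: torch.Tensor) -> torch.Tensor:
        # speech_feats: (batch, time, speech_dim) from a speech encoder
        ling = self.linguistic_adapter(speech_feats)
        para = self.paralinguistic_adapter(
            speech_feats.mean(dim=1, keepdim=True))  # one summary token
        if self.training:
            # Noise keeps the summary token from collapsing into a
            # fixed task-specific vector (assumed mechanism).
            para = para + self.noise_std * torch.randn_like(para)
        # Concatenate the disentangled streams and run the frozen LLM
        # (HuggingFace-style `inputs_embeds` interface assumed).
        inputs = torch.cat([para, ling], dim=1)
        return self.llm(inputs_embeds=inputs).logits
```

Keeping the two paths architecturally heterogeneous is what lets the frozen LLM receive structured, disentangled speech representations rather than a single entangled stream.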

📝 Abstract
Conversational systems relying on text-based large language models (LLMs) often overlook paralinguistic cues, essential for understanding emotions and intentions. Speech-language models (SLMs), which use speech as input, are emerging as a promising solution. However, SLMs built by extending frozen LLMs struggle to capture paralinguistic information and exhibit reduced context understanding. We identify entangled information and improper training strategies as key issues. To address these issues, we propose two heterogeneous adapters and suggest a weakly supervised training strategy. Our approach disentangles paralinguistic and linguistic information, enabling SLMs to interpret speech through structured representations. It also preserves contextual understanding by avoiding the generation of task-specific vectors through controlled randomness. This approach trains only the adapters on common datasets, ensuring parameter and data efficiency. Experiments demonstrate competitive performance in emotional conversation tasks, showcasing the model's ability to effectively integrate both paralinguistic and linguistic information within contextual settings.
Problem

Research questions and friction points this paper is trying to address.

SLMs fail to capture paralinguistic cues in speech
Entangled information reduces context understanding in SLMs
Improper training strategies hinder SLM performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses two heterogeneous adapters for disentanglement
Applies weakly supervised training strategy
Preserves context with controlled randomness (see the training sketch below)
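As a rough illustration of the parameter- and data-efficiency claims, a training loop in the spirit of the paper would update only the adapter weights while the LLM stays frozen. This is a minimal sketch continuing the hypothetical DualAdapterSLM above; frozen_llm, dataloader, and a cross-entropy objective over weak labels are assumptions, since the exact weakly supervised strategy is not spelled out in this summary.

```python
import torch
import torch.nn.functional as F

# Only adapter parameters require gradients, so the optimizer never
# touches the frozen backbone (parameter-efficient fine-tuning).
model = DualAdapterSLM(llm=frozen_llm)  # frozen_llm: assumed HF-style causal LM
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)

model.train()
for speech_feats, targets in dataloader:  # common, weakly labeled speech data
    logits = model(speech_feats)          # (batch, seq, vocab)
    # Alignment between logits and weak targets is assumed here.
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```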