Emotion Omni: Enabling Empathetic Speech Response Generation through Large Language Models

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current speech large language models lack fine-grained understanding of the affective and paralinguistic cues in user utterances, resulting in responses deficient in empathy. Meanwhile, mainstream empathetic-modeling approaches rely heavily on large-scale annotated datasets and computationally intensive training, hindering low-resource deployment. To address these limitations, we propose Emotion Omni, a modular architecture that avoids end-to-end large-scale training. It comprises three decoupled lightweight components: emotion recognition, context-aware response generation, and expressive speech synthesis built on an open-source TTS framework. Trained exclusively on a self-constructed emotional dialogue dataset of 200K samples, Emotion Omni achieves efficient empathetic modeling under constrained resources. Experimental results demonstrate that it accurately perceives user emotions and generates natural, diverse, and empathetic spoken responses, significantly enhancing the quality and engagement of human–machine voice interaction.
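The three-stage modular pipeline described above can be sketched as follows. This is a minimal illustrative sketch only: every class, function, and heuristic here (e.g. `recognize_emotion`, the keyword-based classifier, the placeholder TTS) is an assumption for exposition, not the paper's actual API or method.

```python
from dataclasses import dataclass

@dataclass
class EmotionResult:
    label: str        # e.g. "sad", "happy", "frustrated", "neutral"
    confidence: float

def recognize_emotion(user_utterance: str) -> EmotionResult:
    """Stage 1: detect affective cues in the user's query.
    A toy keyword heuristic stands in for a real emotion recognizer."""
    cues = {"sorry": "sad", "great": "happy", "ugh": "frustrated"}
    for word, label in cues.items():
        if word in user_utterance.lower():
            return EmotionResult(label, 0.9)
    return EmotionResult("neutral", 0.5)

def generate_response(user_utterance: str, emotion: EmotionResult) -> str:
    """Stage 2: condition the reply text on the detected emotion."""
    empathy_prefix = {
        "sad": "I'm sorry to hear that. ",
        "frustrated": "That sounds annoying. ",
        "happy": "That's wonderful! ",
        "neutral": "",
    }
    return empathy_prefix[emotion.label] + "Tell me more about it."

def synthesize_speech(text: str, emotion: EmotionResult) -> bytes:
    """Stage 3: expressive TTS; a tagged-string placeholder stands in
    for an open-source TTS call."""
    return f"<audio emotion={emotion.label}>{text}</audio>".encode()

def empathetic_pipeline(user_utterance: str) -> bytes:
    """Decoupled stages: each component can be swapped or trained
    independently, avoiding end-to-end large-scale training."""
    emotion = recognize_emotion(user_utterance)
    reply = generate_response(user_utterance, emotion)
    return synthesize_speech(reply, emotion)

audio = empathetic_pipeline("Ugh, my flight got cancelled again.")
print(audio.decode())
```

The point of the sketch is the decoupling itself: because the stages communicate through small, explicit interfaces (an emotion label plus reply text), each module can be a lightweight component rather than part of one jointly trained end-to-end model.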

📝 Abstract
With the development of speech large language models (speech LLMs), users can now interact with assistants directly via speech. However, most existing models simply convert the response content into speech without fully understanding the rich emotional and paralinguistic cues embedded in the user's query. In many cases, the same sentence carries different meanings depending on its emotional expression, so emotional understanding is essential for improving the user experience in human–machine interaction. Currently, most speech LLMs with empathetic capabilities are trained on massive datasets, which demands vast amounts of data and significant computational resources. A key challenge, therefore, is how to build a speech LLM that generates empathetic responses with limited data and without large-scale training. To address this challenge, we propose Emotion Omni, a novel model architecture designed to understand the emotional content of user speech input and generate empathetic speech responses. Additionally, we developed a data generation pipeline based on an open-source TTS framework to construct a 200K-sample emotional dialogue dataset, which supports the construction of an empathetic speech assistant. Demos are available at https://w311411.github.io/omni_demo/
Problem

Research questions and friction points this paper is trying to address.

Enabling empathetic speech responses with limited data
Understanding emotional cues in user speech queries
Generating empathetic speech without large-scale training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel model architecture for empathetic speech understanding
Data generation pipeline with TTS framework
Emotional dialogue dataset construction for limited data training
Haoyu Wang
Zhejiang University
Guangyan Zhang
LIGHTSPEED
Jiale Chen
Zhejiang University
Jingyu Li
LIGHTSPEED
Yuehai Wang
Zhejiang University
Yiwen Guo
Research Scientist
Machine Learning · Deep Learning · Image Processing