🤖 AI Summary
Current speech large language models lack fine-grained understanding of the affective and paralinguistic cues in user utterances, so their responses are deficient in empathy. Meanwhile, mainstream approaches to empathetic modeling rely heavily on large-scale annotated datasets and computationally intensive training, hindering low-resource deployment. To address these limitations, we propose Emotion Omni, a modular architecture that avoids end-to-end large-scale training. It decouples three lightweight components: emotion recognition, context-aware response generation, and expressive speech synthesis based on an open-source TTS framework. Trained exclusively on a self-constructed emotional dialogue dataset of 200K samples, Emotion Omni achieves efficient empathetic modeling under constrained resources. Experimental results demonstrate its capability to accurately perceive user emotions and generate natural, diverse, and empathetic spoken responses, significantly enhancing the quality and engagement of human–machine voice interaction.
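The decoupled three-stage design described above can be sketched as a simple pipeline. This is a minimal illustration, not the paper's implementation: every function name and body below is a hypothetical placeholder stub, since the summary does not specify any APIs.

```python
# Hypothetical sketch of Emotion Omni's decoupled pipeline:
# emotion recognition -> emotion-conditioned response generation -> expressive TTS.
# All stages are placeholder stubs for illustration only.

def recognize_emotion(user_speech: str) -> str:
    """Stage 1 (stub): detect affective/paralinguistic cues in the user's utterance."""
    return "sad" if "sigh" in user_speech.lower() else "neutral"

def generate_response(user_text: str, emotion: str) -> str:
    """Stage 2 (stub): condition the text response on the detected emotion."""
    prefix = {"sad": "I'm sorry to hear that. ", "neutral": ""}[emotion]
    return prefix + f"Here is my answer to: {user_text}"

def synthesize_speech(text: str, emotion: str) -> bytes:
    """Stage 3 (stub): expressive speech synthesis; returns fake audio bytes."""
    return f"<{emotion}-style audio> {text}".encode()

def emotion_omni(user_speech: str, user_text: str) -> bytes:
    """Run the three lightweight components in sequence."""
    emotion = recognize_emotion(user_speech)
    reply = generate_response(user_text, emotion)
    return synthesize_speech(reply, emotion)

audio = emotion_omni("(sigh) I failed my exam", "I failed my exam")
print(audio.decode())
```

Because the stages are decoupled, each component could in principle be swapped independently (e.g. a different emotion classifier or TTS backend) without retraining the whole system.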
📝 Abstract
With the development of speech large language models (speech LLMs), users can now interact directly with assistants via speech. However, most existing models simply convert the response content into speech without fully understanding the rich emotional and paralinguistic cues embedded in the user's query. In many cases, the same sentence can carry different meanings depending on its emotional expression, and emotional understanding is essential for improving user experience in human–machine interaction. Currently, most speech LLMs with empathetic capabilities are trained on massive datasets, which requires vast amounts of data and significant computational resources. A key challenge therefore lies in developing a speech LLM capable of generating empathetic responses with limited data and without large-scale training. To address this challenge, we propose Emotion Omni, a novel model architecture designed to understand the emotional content of user speech input and generate empathetic speech responses. Additionally, we developed a data generation pipeline based on an open-source TTS framework to construct a 200K emotional dialogue dataset, which supports building an empathetic speech assistant. Demos are available at https://w311411.github.io/omni_demo/
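The data generation pipeline mentioned above pairs dialogue text with emotion labels and synthesized audio. A minimal sketch under stated assumptions: the abstract does not describe the record schema or the TTS interface, so `tts_synthesize` and all field names below are hypothetical stand-ins.

```python
# Hypothetical sketch of the emotional-dialogue data-generation pipeline.
# The TTS call and the record schema are illustrative assumptions, not the paper's API.
import json

def tts_synthesize(text: str, emotion: str) -> str:
    """Stub for an open-source TTS call; returns a fake audio file path."""
    return f"audio/{emotion}/{abs(hash(text)) % 10**8:08d}.wav"

def build_dataset(dialogues):
    """Attach an emotion label and synthesized query audio to each dialogue pair."""
    records = []
    for query, response, emotion in dialogues:
        records.append({
            "query_text": query,
            "emotion": emotion,
            "query_audio": tts_synthesize(query, emotion),
            "response_text": response,
        })
    return records

sample = [("I got the job!", "Congratulations! That's wonderful news.", "happy")]
print(json.dumps(build_dataset(sample), indent=2))
```

Scaling such a loop over many (query, response, emotion) triples is one plausible way to reach the 200K-sample dataset size reported in the paper.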