MPE-TTS: Customized Emotion Zero-Shot Text-To-Speech Using Multi-Modal Prompt

📅 2025-05-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing zero-shot text-to-speech (TTS) systems rely on single-modality prompts (e.g., text or speech), limiting flexible emotional voice customization. This work proposes the first zero-shot emotional TTS framework supporting tri-modal emotional prompting—text, image, and speech—enabling fine-grained control via disentangled modeling of linguistic content, speaker identity, emotion, and prosody. Key contributions include: (1) a multimodal emotional prompt encoder that maps heterogeneous inputs into a unified emotional representation space; (2) a prosody predictor coupled with an emotion-consistency loss to ensure cross-modal emotional fidelity; and (3) integration of a diffusion-based acoustic model with emotion-disentangled representation learning. Experiments demonstrate significant improvements over state-of-the-art zero-shot TTS systems in naturalness (MOS gain ≥ 0.4) and speaker similarity, while enabling real-time, emotion-controllable synthesis.

Technology Category

Application Category

📝 Abstract
Most existing Zero-Shot Text-To-Speech(ZS-TTS) systems generate the unseen speech based on single prompt, such as reference speech or text descriptions, which limits their flexibility. We propose a customized emotion ZS-TTS system based on multi-modal prompt. The system disentangles speech into the content, timbre, emotion and prosody, allowing emotion prompts to be provided as text, image or speech. To extract emotion information from different prompts, we propose a multi-modal prompt emotion encoder. Additionally, we introduce an prosody predictor to fit the distribution of prosody and propose an emotion consistency loss to preserve emotion information in the predicted prosody. A diffusion-based acoustic model is employed to generate the target mel-spectrogram. Both objective and subjective experiments demonstrate that our system outperforms existing systems in terms of naturalness and similarity. The samples are available at https://mpetts-demo.github.io/mpetts_demo/.
Problem

Research questions and friction points this paper is trying to address.

Enables emotion customization in zero-shot TTS using multi-modal prompts
Disentangles speech into content, timbre, emotion, and prosody components
Improves emotion preservation and naturalness over single-prompt systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal prompt emotion encoder
Prosody predictor with emotion consistency
Diffusion-based acoustic mel-spectrogram generation
Z
Zhichao Wu
Nanjing University of Aeronautics and Astronautics, China
Y
Yueteng Kang
Youtu Lab, Tencent, China
Songjun Cao
Songjun Cao
Tencent
speech understandingspeech generationmulti-modalLLM
Long Ma
Long Ma
Dalian University of Technology
Computer VisionImage Processing
Q
Qiulin Li
Nanjing University of Aeronautics and Astronautics, China
Q
Qun Yang
Nanjing University of Aeronautics and Astronautics, China