🤖 AI Summary
Cross-lingual emotional text-to-speech (TTS) must model source-language emotion and target-language speaker identity simultaneously, yet these attributes are highly entangled in speech, impeding fine-grained disentanglement. To address this, we propose EMM-TTS, a two-stage framework: the first stage transfers emotion via perturbed self-supervised learning (SSL) representations, and the second restores the target speaker's timbre by combining explicit acoustic features (F0, energy, and duration) with pretrained latent features and a speaker-emotion adaptive normalization module. A speaker consistency loss and speaker perturbation strategies (formant shifting and speaker anonymization) further strengthen cross-lingual emotion transfer and voice stability. Extensive subjective and objective evaluations on multilingual data show that EMM-TTS outperforms state-of-the-art methods in naturalness, emotional fidelity, and speaker identity consistency.
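The paper does not include reference code here, so the following is a minimal sketch of what a speaker-emotion adaptive normalization block (the SEALN module named in the abstract) could look like. The conditioning scheme, module name, and all dimensions are assumptions for illustration, not the authors' implementation: hidden states are layer-normalized, then rescaled and shifted by a gain and bias predicted from concatenated speaker and emotion embeddings.

```python
import torch
import torch.nn as nn

class SEALN(nn.Module):
    """Sketch of a speaker-emotion adaptive layer normalization block.

    LayerNorm without its own affine parameters; the per-channel scale
    (gamma) and shift (beta) are instead predicted from the speaker and
    emotion condition. This conditioning design is an assumption.
    """

    def __init__(self, hidden_dim: int, spk_dim: int, emo_dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_dim, elementwise_affine=False)
        # One linear layer predicts both gamma and beta from the
        # concatenated speaker + emotion embedding.
        self.affine = nn.Linear(spk_dim + emo_dim, 2 * hidden_dim)

    def forward(self, x: torch.Tensor, spk_emb: torch.Tensor,
                emo_emb: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, hidden_dim)
        # spk_emb: (batch, spk_dim), emo_emb: (batch, emo_dim)
        cond = torch.cat([spk_emb, emo_emb], dim=-1)
        gamma, beta = self.affine(cond).chunk(2, dim=-1)
        # Broadcast the (batch, hidden_dim) affine over the time axis.
        return self.norm(x) * (1.0 + gamma.unsqueeze(1)) + beta.unsqueeze(1)
```

In this sketch the condition modulates every decoder frame, which is the usual way adaptive layer normalization injects style in TTS models such as StyleSpeech; the paper may condition differently.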
📝 Abstract
Cross-lingual emotional text-to-speech (TTS) aims to produce speech in one language that conveys the emotion of a speaker from another language while preserving the target speaker's timbre. This task requires flexible, independent control over emotion, timbre, and language. However, emotion and timbre are highly entangled in speech signals, making fine-grained control challenging. To address this issue, we propose EMM-TTS, a novel two-stage cross-lingual emotional speech synthesis framework built on perturbed self-supervised learning (SSL) representations. In the first stage, the model encodes prosodic cues both explicitly and implicitly to capture emotional expressiveness; in the second stage, it restores the target timbre from the perturbed SSL representations. We further investigate how different speaker perturbation strategies, namely formant shifting and speaker anonymization, affect the disentanglement of emotion and timbre. To strengthen speaker preservation and expressive control, we introduce a Speaker Consistency Loss (SCL) and Speaker-Emotion Adaptive Layer Normalization (SEALN) modules. Additionally, we find that incorporating explicit acoustic features (e.g., F0, energy, and duration) alongside pretrained latent features improves voice cloning performance. Comprehensive evaluations with both subjective and objective measures demonstrate that EMM-TTS achieves superior naturalness, emotion transferability, and timbre consistency across languages.
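The abstract names formant shifting as one speaker perturbation strategy but gives no settings here. The sketch below shows one common way to implement it with Praat's "Change gender" command via the parselmouth bindings, as done in NANSY-style pipelines; the pitch floor/ceiling and the shift-ratio range are illustrative assumptions, not values from the paper.

```python
import random

import parselmouth
from parselmouth.praat import call

def formant_shift(wav_path: str, out_path: str,
                  ratio_range: tuple = (0.85, 1.15)) -> None:
    """Perturb speaker identity by shifting formants with Praat's
    "Change gender" command while keeping the pitch median and
    duration unchanged. The ratio range is an illustrative choice.
    """
    snd = parselmouth.Sound(wav_path)
    ratio = random.uniform(*ratio_range)
    # "Change gender" arguments: pitch floor (Hz), pitch ceiling (Hz),
    # formant shift ratio, new pitch median (0 = keep original),
    # pitch range factor, duration factor.
    shifted = call(snd, "Change gender", 75, 600, ratio, 0, 1.0, 1.0)
    shifted.save(out_path, "WAV")

# Example: perturb a reference utterance before SSL feature extraction.
# formant_shift("ref.wav", "ref_shifted.wav")
```

Perturbing the waveform before extracting SSL features pushes the representation to carry prosody and content rather than speaker identity, which is consistent with the two-stage design described in the abstract.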