Perturbation Self-Supervised Representations for Cross-Lingual Emotion TTS: Stage-Wise Modeling of Emotion and Speaker

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Cross-lingual emotional text-to-speech (TTS) requires simultaneous modeling of source-language emotion and target-language speaker identity, yet these attributes are highly entangled in speech, impeding fine-grained disentanglement. To address this, we propose EMM-TTS, a two-stage framework: first, emotion transfer is achieved via perturbed self-supervised representations; second, target speaker identity is preserved through explicit acoustic modeling—incorporating F0, energy, duration, and formants—combined with an adaptive normalization module. We further introduce a speaker-consistency loss and anonymized perturbation strategy to enhance cross-lingual emotion transferability and voice stability. Extensive evaluations on multilingual benchmarks demonstrate that EMM-TTS significantly outperforms state-of-the-art methods in both objective metrics and subjective listening tests, achieving substantial improvements in naturalness, emotional fidelity, and speaker identity consistency.

📝 Abstract
Cross-lingual emotional text-to-speech (TTS) aims to produce speech in one language that captures the emotion of a speaker from another language while maintaining the target voice's timbre. This process of cross-lingual emotional speech synthesis presents a complex challenge, necessitating flexible control over emotion, timbre, and language. However, emotion and timbre are highly entangled in speech signals, making fine-grained control challenging. To address this issue, we propose EMM-TTS, a novel two-stage cross-lingual emotional speech synthesis framework based on perturbed self-supervised learning (SSL) representations. In the first stage, the model explicitly and implicitly encodes prosodic cues to capture emotional expressiveness, while the second stage restores the timbre from perturbed SSL representations. We further investigate the effect of different speaker perturbation strategies, formant shifting and speaker anonymization, on the disentanglement of emotion and timbre. To strengthen speaker preservation and expressive control, we introduce Speaker Consistency Loss (SCL) and Speaker-Emotion Adaptive Layer Normalization (SEALN) modules. Additionally, we find that incorporating explicit acoustic features (e.g., F0, energy, and duration) alongside pretrained latent features improves voice cloning performance. Comprehensive multi-metric evaluations, including both subjective and objective measures, demonstrate that EMM-TTS achieves superior naturalness, emotion transferability, and timbre consistency across languages.
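The abstract names SEALN but does not detail its internals here. A common pattern matching the name is adaptive layer normalization, where per-channel scale and shift are predicted from the conditioning embeddings; the sketch below follows that pattern, with all dimensions and the exact layout being our assumptions rather than the paper's specification:

```python
import torch
import torch.nn as nn

class SEALN(nn.Module):
    """Hypothetical sketch of Speaker-Emotion Adaptive Layer Normalization:
    a layer norm whose scale and shift are predicted from concatenated
    speaker and emotion embeddings. Dimensions are illustrative assumptions."""

    def __init__(self, hidden_dim: int, spk_dim: int, emo_dim: int):
        super().__init__()
        # disable the norm's own affine params; they come from the condition
        self.norm = nn.LayerNorm(hidden_dim, elementwise_affine=False)
        # one linear layer maps the condition to per-channel gamma and beta
        self.affine = nn.Linear(spk_dim + emo_dim, 2 * hidden_dim)

    def forward(self, x, spk_emb, emo_emb):
        # x: (batch, time, hidden_dim); embeddings: (batch, dim)
        cond = torch.cat([spk_emb, emo_emb], dim=-1)
        gamma, beta = self.affine(cond).chunk(2, dim=-1)
        # broadcast the conditioning over the time axis
        return self.norm(x) * (1 + gamma.unsqueeze(1)) + beta.unsqueeze(1)
```

The `1 + gamma` form keeps the module close to an identity transform at initialization, a standard choice for conditional normalization layers.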
Problem

Research questions and friction points this paper is trying to address.

Disentangling emotion and timbre in cross-lingual speech synthesis
Achieving fine-grained control over emotion, timbre, and language
Preserving speaker identity while transferring emotional expressiveness across languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage framework disentangles emotion and timbre
Perturbed SSL representations with speaker anonymization
Speaker-Emotion Adaptive Layer Normalization modules
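The perturbation strategies are only named above. As a toy illustration of what formant shifting does, the sketch below linearly warps the frequency axis of a magnitude spectrogram; real systems typically warp only the spectral envelope and preserve pitch, so this simplification is our assumption, not the paper's method:

```python
import numpy as np

def formant_shift(spec: np.ndarray, alpha: float) -> np.ndarray:
    """Toy formant shift: warp the frequency axis of a magnitude
    spectrogram by factor alpha (alpha > 1 raises formants).
    spec: (freq_bins, frames). Linear interpolation between bins."""
    n_bins = spec.shape[0]
    # each target bin k reads from source position k / alpha
    idx = np.arange(n_bins) / alpha
    lo = np.floor(idx).astype(int)
    frac = (idx - lo)[:, None]
    lo = np.clip(lo, 0, n_bins - 1)
    hi = np.clip(lo + 1, 0, n_bins - 1)
    return (1 - frac) * spec[lo] + frac * spec[hi]
```

Applying such a warp before SSL feature extraction perturbs timbre cues while leaving prosody largely intact, which is the intuition behind using it for emotion-timbre disentanglement.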
Cheng Gong
Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
Chunyu Qiang
Kuaishou Technology; TJU; CASIA
Speech Synthesis
Tianrui Wang
Tianjin University
Speech Signal Processing
Yu Jiang
Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
Yuheng Lu
Peking University
3D Computer Vision
Ruihao Jing
Institute of Artificial Intelligence (TeleAI), China Telecom, China
Xiaoxiao Miao
Duke Kunshan University
Speech Privacy, Speaker and Language Identification, Speech Synthesis
Xiaolei Zhang
Institute of Artificial Intelligence (TeleAI), China Telecom, China; Northwestern Polytechnical University
Longbiao Wang
Professor, Tianjin University
Speech Processing, Speech Recognition, Speaker Recognition, Acoustic Signal Processing, Speech Enhancement
Jianwu Dang
JAIST, Japan / Tianjin Univ., China
Speech Science, Speech Production, EEG, Disordered Speech