🤖 AI Summary
Existing emotional TTS systems are constrained by discrete emotion labels and sparse annotation, limiting their ability to capture the continuity and complexity of human affect. This paper proposes the first method to seamlessly integrate the psychological PAD (Pleasure-Arousal-Dominance) three-dimensional emotion model into a language-model-driven TTS framework—enabling unsupervised disentanglement and learning of continuous emotional styles directly from expressive speech, without requiring explicit emotion labels. Key innovations include: (1) a classification-based emotion dimension predictor trained on labeled speech data, and (2) an end-to-end LM-TTS architecture jointly modeling linguistic and psychometric representations. Experiments demonstrate that our approach significantly improves emotional naturalness and spectral coverage of synthesized speech under zero-shot emotion-label supervision. Both objective metrics (e.g., F0 variance, spectral contrast) and subjective MOS scores surpass those of state-of-the-art baselines.
📝 Abstract
Current emotional text-to-speech systems face challenges in conveying the full spectrum of human emotions, largely due to the inherent complexity of human emotions and the limited range of emotional labels in existing speech datasets. To address these limitations, this paper introduces a TTS framework that provides flexible user control over three emotional dimensions - pleasure, arousal, and dominance - enabling the synthesis of a diverse array of emotional styles. The framework leverages an emotional dimension predictor, trained soley on categorical labels from speech data and grounded in earlier psychological research, which is seamlessly integrated into a language model-based TTS system. Experimental results demonstrates that the proposed framework effectively learns emotional styles from expressive speech, eliminating the need for explicit emotion labels during TTS training, while enhancing the naturalness and diversity of synthesized emotional speech.