🤖 AI Summary
Current emotional TTS faces two key bottlenecks: (1) discrete emotion labels fail to capture the continuity and complexity of human affect; and (2) the scarcity of large-scale, balanced, fine-grained emotion-annotated speech corpora leads to model overfitting and limited controllability. To address these, we propose a controllable emotional TTS framework integrating discrete labels with a dimensional emotion representation—specifically, the Arousal-Dominance-Valence (ADV) space. Our approach unifies discrete and dimensional emotion modeling for the first time; introduces an interpretable linear control mechanism within the ADV space; and employs a semi-supervised multi-source alignment strategy to jointly leverage heterogeneously labeled data. By incorporating ADV-based 3D embeddings, nonlinear quantized encoding, and a neural codec architecture, our method enables continuous, linearly adjustable emotional synthesis along all three dimensions. Experiments demonstrate significant improvements in emotion fidelity and cross-dataset generalization, reduced few-shot overfitting, and state-of-the-art end-to-end synthesis quality.
📝 Abstract
Recent neural codec language models have made great progress in the field of text-to-speech (TTS), but controllable emotional TTS still faces many challenges. Traditional methods rely on predefined discrete emotion labels to control emotion categories and intensities, which cannot capture the complexity and continuity of human emotional perception and expression. The lack of large-scale emotional speech datasets with balanced emotion distributions and fine-grained emotion annotations often causes overfitting in synthesis models and impedes effective emotion control. To address these issues, we propose UDDETTS, a neural codec language model unifying discrete and dimensional emotions for controllable emotional TTS. This model introduces the interpretable Arousal-Dominance-Valence (ADV) space for dimensional emotion description and supports emotion control driven by either discrete emotion labels or nonlinearly quantified ADV values. Furthermore, a semi-supervised training strategy is designed to comprehensively utilize diverse speech datasets with different types of emotion annotations to train UDDETTS. Experiments show that UDDETTS achieves linear emotion control along the three dimensions of the ADV space, and exhibits superior end-to-end emotional speech synthesis capabilities.
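To make the idea of "nonlinearly quantified ADV values" concrete, the sketch below shows one plausible way to map a continuous ADV coordinate in [-1, 1] to a discrete bin index via mu-law-style companding, yielding a compact 3D emotion code a synthesizer could condition on. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the bin count, companding constant, and all function names (`quantize_adv`, `adv_code`) are hypothetical.

```python
import math

MU = 15.0      # companding strength (hypothetical choice)
N_BINS = 16    # bins per ADV dimension (hypothetical choice)

def quantize_adv(value: float) -> int:
    """Map an ADV coordinate in [-1, 1] to a bin index in [0, N_BINS - 1]
    using mu-law companding, which allocates finer resolution near the
    neutral point 0 than at the extremes."""
    v = max(-1.0, min(1.0, value))
    compressed = math.copysign(math.log1p(MU * abs(v)) / math.log1p(MU), v)
    # shift [-1, 1] -> [0, N_BINS - 1]
    return min(N_BINS - 1, int((compressed + 1.0) / 2.0 * N_BINS))

def adv_code(arousal: float, dominance: float, valence: float) -> tuple[int, int, int]:
    """Discrete 3D emotion code for conditioning a TTS model."""
    return (quantize_adv(arousal), quantize_adv(dominance), quantize_adv(valence))

# Sweeping one dimension while holding the others fixed is the kind of
# interpretable, per-axis control the abstract describes:
for a in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(adv_code(a, 0.0, 0.0))
```

Because the quantizer is monotone in each dimension, increasing (say) arousal while freezing dominance and valence moves the code along a single axis, which is the property that makes linear emotion control tractable.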