Emotional Dimension Control in Language Model-Based Text-to-Speech: Spanning a Broad Spectrum of Human Emotions

📅 2024-09-25
🏛️ arXiv.org
📈 Citations: 4
Influential: 1
🤖 AI Summary
Existing emotional TTS systems are constrained by discrete emotion labels and sparse annotation, limiting their ability to capture the continuity and complexity of human affect. This paper proposes the first method to seamlessly integrate the psychological PAD (Pleasure-Arousal-Dominance) three-dimensional emotion model into a language-model-driven TTS framework, enabling unsupervised disentanglement and learning of continuous emotional styles directly from expressive speech, without requiring explicit emotion labels. Key innovations include: (1) a classification-based emotion dimension predictor trained on categorically labeled speech data, and (2) an end-to-end LM-TTS architecture that jointly models linguistic and psychometric representations. Experiments demonstrate that the approach significantly improves the emotional naturalness and spectral coverage of synthesized speech without emotion-label supervision during TTS training. Both objective metrics (e.g., F0 variance, spectral contrast) and subjective MOS scores surpass those of state-of-the-art baselines.

📝 Abstract
Current emotional text-to-speech systems face challenges in conveying the full spectrum of human emotions, largely due to the inherent complexity of human emotions and the limited range of emotional labels in existing speech datasets. To address these limitations, this paper introduces a TTS framework that provides flexible user control over three emotional dimensions (pleasure, arousal, and dominance), enabling the synthesis of a diverse array of emotional styles. The framework leverages an emotional dimension predictor, trained solely on categorical labels from speech data and grounded in earlier psychological research, which is seamlessly integrated into a language model-based TTS system. Experimental results demonstrate that the proposed framework effectively learns emotional styles from expressive speech, eliminating the need for explicit emotion labels during TTS training, while enhancing the naturalness and diversity of synthesized emotional speech.
Problem

Research questions and friction points this paper is trying to address.

Control emotional dimensions in TTS for diverse human emotions
Overcome limited emotional labels in current speech datasets
Enhance naturalness and diversity of synthesized emotional speech
Innovation

Methods, ideas, or system contributions that make the work stand out.

Flexible control over pleasure, arousal, dominance
Emotional dimension predictor trained on categorical labels
Language model-based TTS without explicit emotion labels
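The emotion dimension predictor above is described as trained on categorical labels yet producing continuous pleasure-arousal-dominance values. A minimal sketch of one way to bridge that gap (not the paper's exact method) is to take a classifier's soft posterior over emotion categories and map it into PAD space as a probability-weighted average of per-class PAD prototypes. The prototype values below are illustrative placeholders, not figures from the paper or the psychological literature.

```python
import numpy as np

# Illustrative PAD (pleasure, arousal, dominance) prototypes per categorical
# emotion. These numbers are made up for demonstration only.
PAD_PROTOTYPES = {
    "happy":   ( 0.8,  0.5,  0.4),
    "angry":   (-0.6,  0.7,  0.3),
    "sad":     (-0.6, -0.4, -0.3),
    "neutral": ( 0.0,  0.0,  0.0),
}

def soft_pad(class_probs: dict) -> np.ndarray:
    """Map a classifier's soft emotion posterior to a continuous PAD point
    as the probability-weighted average of per-class PAD prototypes."""
    pad = np.zeros(3)
    for emo, prob in class_probs.items():
        pad += prob * np.array(PAD_PROTOTYPES[emo])
    return pad

# An utterance the classifier finds mostly happy, slightly angry:
probs = {"happy": 0.7, "angry": 0.2, "sad": 0.0, "neutral": 0.1}
print(soft_pad(probs))  # a continuous point inside the PAD cube, not a hard label
```

Because the output is a point in a continuous 3-D space rather than a class index, a downstream LM-based TTS model conditioned on it can interpolate between emotional styles that never appeared as discrete labels in the training data.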
Kun Zhou
Alibaba Group, Singapore
You Zhang
University of Rochester, United States of America
Shengkui Zhao
Senior Algorithm Expert, Alibaba Group
Speech processing and large models
Hao Wang
Alibaba Group, Singapore
Zexu Pan
Alibaba Group, Singapore
Dianwen Ng
MiroMind, Alibaba-NTU Singapore Joint Research Institute
Artificial Intelligence, Deep Learning, Speech Recognition, Self-supervised Learning
Chong Zhang
Alibaba Group, Singapore
Chongjia Ni
Alibaba Group, Singapore
Yukun Ma
Alibaba Group
ASR, SLU
Trung Hieu Nguyen
Alibaba Group, Singapore
J. Yip
Alibaba Group, Singapore
Bin Ma
Alibaba Group, Singapore