🤖 AI Summary
Existing emotional speech synthesis approaches struggle to accurately model emotion-specific latent characteristics, often resulting in unnatural or poorly controllable emotional expression. This work proposes EmoShift, a framework that employs a lightweight EmoSteer layer to learn emotion-specific offset vectors within the output embedding space. By introducing only approximately 10 million additional parameters, EmoShift efficiently captures the semantic shifts associated with each emotion. The method enables fine-grained control over emotional intensity without requiring extensive fine-tuning, preserving both speech naturalness and speaker identity. Experimental results demonstrate that EmoShift consistently outperforms zero-shot transfer and full-parameter fine-tuning baselines, achieving more stable and contextually appropriate emotional expression in both objective and subjective evaluations.
📝 Abstract
Achieving precise and controllable emotional expression is crucial for producing natural and context-appropriate speech in text-to-speech (TTS) synthesis. However, many emotion-aware TTS systems, including large language model (LLM)-based designs, rely on scaling fixed emotion embeddings or external guidance, limiting their ability to model emotion-specific latent characteristics. To address this gap, we present EmoShift, a lightweight activation-steering framework incorporating an EmoSteer layer, which learns a steering vector for each target emotion in the output embedding space to capture its latent offset and maintain stable, appropriate expression across utterances and categories. With only 10M trainable parameters, less than 1/30 of those updated in full fine-tuning, EmoShift outperforms zero-shot and fully fine-tuned baselines in objective and subjective evaluations, enhancing emotional expressiveness while preserving naturalness and speaker similarity. Further analysis confirms the proposed EmoSteer layer's effectiveness and reveals its potential for controllable emotional intensity in speech synthesis.
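For readers unfamiliar with activation steering, the sketch below illustrates the core idea described above: a learnable per-emotion offset vector added to the backbone's output embeddings, with a scalar used to modulate emotional intensity. This is a minimal illustration under stated assumptions, not the paper's implementation; the class name `EmoSteerLayer`'s internals, the `num_emotions`/`hidden_dim` arguments, and the `alpha` intensity scale are assumptions (the actual layer, at roughly 10M parameters, is presumably richer than a single vector per emotion).

```python
import torch
import torch.nn as nn


class EmoSteerLayer(nn.Module):
    """Hypothetical sketch of an activation-steering layer: one learnable
    offset (steering) vector per emotion category, added to the TTS
    backbone's output embeddings. Only these vectors would be trained;
    the backbone itself stays frozen."""

    def __init__(self, num_emotions: int, hidden_dim: int):
        super().__init__()
        # One steering vector per emotion, initialized to zero so the
        # untrained layer leaves the backbone's embeddings unchanged.
        self.steering = nn.Parameter(torch.zeros(num_emotions, hidden_dim))

    def forward(self, hidden: torch.Tensor, emotion_id: torch.Tensor,
                alpha: float = 1.0) -> torch.Tensor:
        # hidden:     (batch, seq_len, hidden_dim) output embeddings
        # emotion_id: (batch,) index of the target emotion per utterance
        # alpha:      scalar controlling emotional intensity (assumption:
        #             the learned offset is scaled at inference time)
        offset = self.steering[emotion_id].unsqueeze(1)  # (batch, 1, hidden_dim)
        return hidden + alpha * offset


# Usage sketch: steer frozen-backbone embeddings toward emotion index 1
# at half intensity. All shapes and indices here are illustrative.
steer = EmoSteerLayer(num_emotions=5, hidden_dim=1024)
hidden = torch.randn(2, 50, 1024)          # backbone output embeddings
emotion_id = torch.tensor([1, 1])          # target emotion per utterance
steered = steer(hidden, emotion_id, alpha=0.5)
```

Under this reading, intensity control falls out naturally: setting `alpha` between 0 and 1 interpolates between neutral and fully steered embeddings, which matches the abstract's claim of controllable emotional intensity without retraining.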