CoCoEmo: Composable and Controllable Human-Like Emotional TTS via Activation Steering

📅 2026-02-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitation of existing emotional text-to-speech (TTS) systems, which are often confined to single emotions and struggle to generate natural speech with complex or text-incongruent affective expressions. The authors propose a lightweight activation manipulation framework based on latent directional vectors, enabling disentangled and composable emotional control within a hybrid TTS architecture comprising a linguistic module and a flow-matching module. For the first time, the study systematically validates the linear controllability of emotion in TTS and reveals that emotional prosody is primarily generated by the linguistic module. Evaluated under a multi-rater assessment protocol, the method significantly enhances the diversity, naturalness, and controllability of expressive speech synthesis, supporting both compound emotions and text-emotion mismatch scenarios.
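The paper does not include code in this summary, but the core idea it describes — steering with latent directional vectors, composed per emotion — is commonly implemented as a difference-of-means direction added to hidden activations. The sketch below is illustrative only; the function names, dimensions, and the simple additive composition are assumptions, not the authors' implementation.

```python
import numpy as np

def emotion_direction(emo_acts: np.ndarray, neu_acts: np.ndarray) -> np.ndarray:
    """Difference-of-means direction between emotional and neutral activations.

    emo_acts, neu_acts: (num_examples, hidden_dim) activations collected
    from a chosen layer (e.g. of the TTS language module).
    """
    d = emo_acts.mean(axis=0) - neu_acts.mean(axis=0)
    return d / np.linalg.norm(d)  # unit-normalize so alpha sets the strength

def steer(hidden: np.ndarray, directions, alphas) -> np.ndarray:
    """Compose several emotion directions with per-emotion strengths and
    add the combined vector to every hidden state (composable steering)."""
    combined = sum(a * d for a, d in zip(alphas, directions))
    return hidden + combined

# Toy demonstration with random "activations" (stand-ins for real ones).
rng = np.random.default_rng(0)
happy = emotion_direction(rng.normal(1.0, 1.0, (32, 8)), rng.normal(0.0, 1.0, (32, 8)))
sad = emotion_direction(rng.normal(-1.0, 1.0, (32, 8)), rng.normal(0.0, 1.0, (32, 8)))
h = rng.normal(0.0, 1.0, (4, 8))          # (seq_len, hidden_dim) hidden states
steered = steer(h, [happy, sad], [0.8, 0.3])  # mixed-emotion steering
```

In practice the addition would happen inside the model (e.g. via a forward hook on the linguistic module, which the paper identifies as the locus of emotional prosody), with the alphas exposed as user-facing intensity controls.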

📝 Abstract
Emotional expression in human speech is nuanced and compositional, often involving multiple, sometimes conflicting, affective cues that may diverge from linguistic content. In contrast, most expressive text-to-speech systems enforce a single utterance-level emotion, collapsing affective diversity and suppressing mixed or text-emotion-misaligned expression. While activation steering via latent direction vectors offers a promising solution, it remains unclear whether emotion representations are linearly steerable in TTS, where steering should be applied within hybrid TTS architectures, and how such complex emotion behaviors should be evaluated. This paper presents the first systematic analysis of activation steering for emotional control in hybrid TTS models, introducing a quantitative, controllable steering framework and multi-rater evaluation protocols that enable composable mixed-emotion synthesis and reliable text-emotion mismatch synthesis. Our results demonstrate, for the first time, that emotional prosody and expressive variability are primarily synthesized by the TTS language module rather than the flow-matching module, and we also provide a lightweight steering approach for generating natural, human-like emotional speech.
Problem

Research questions and friction points this paper is trying to address.

Emotional TTS
composable emotion
activation steering
text-emotion mismatch
expressive speech synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

activation steering
emotional TTS
composable emotion
hybrid TTS architecture
text-emotion mismatch