🤖 AI Summary
Current conversational speech synthesis (CSS) systems predominantly rely on deterministic modeling, limiting their ability to simultaneously achieve response diversity, contextual coherence, and expressive emotional prosody, while lacking large language model (LLM)-driven end-to-end architectures. To address these limitations, we propose the first LLM-driven, diffusion-model-enhanced, context-aware CSS framework. Our method comprises: (1) a diffusion-based stochastic prosody predictor enabling controllable, multimodal dialogue-conditioned prosody sampling; and (2) the first language-model-based, explicitly prosody-controllable end-to-end CSS system. Experiments demonstrate substantial improvements over existing CSS models in speech diversity, contextual consistency, and emotional naturalness. The framework achieves state-of-the-art performance on both objective metrics (e.g., MCD, F0 RMSE) and subjective MOS evaluations, validating its effectiveness in generating high-fidelity, contextually grounded, and emotionally expressive speech.
📝 Abstract
Conversational speech synthesis (CSS) aims to synthesize both contextually appropriate and expressive speech, and considerable efforts have been made to enhance the understanding of conversational context. However, existing CSS systems are limited to deterministic prediction, overlooking the diversity of potential responses. Moreover, they rarely employ language model (LM)-based TTS backbones, limiting the naturalness and quality of synthesized speech. To address these issues, in this paper, we propose DiffCSS, an innovative CSS framework that leverages diffusion models and an LM-based TTS backbone to generate diverse, expressive, and contextually coherent speech. A diffusion-based context-aware prosody predictor is proposed to sample diverse prosody embeddings conditioned on multimodal conversational context. Then a prosody-controllable LM-based TTS backbone is developed to synthesize high-quality speech with sampled prosody embeddings. Experimental results demonstrate that the synthesized speech from DiffCSS is more diverse, contextually coherent, and expressive than existing CSS systems