Continuous-Token Diffusion for Speaker-Referenced TTS in Multimodal LLMs

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing MLLM-based TTS approaches rely on discrete speech tokens, neglecting the continuity of speech and consequently losing fine-grained acoustic information. To address this, we propose the first end-to-end MLLM-TTS framework explicitly designed for continuous speech representations. Our method introduces a dual-head architecture, comprising a language modeling head and a frame-level diffusion head, along with a novel continuous-token diffusion mechanism. We adopt a two-stage training strategy: both heads are first trained jointly, after which the language model is frozen so the diffusion head learns from a fixed input distribution; masked training is further incorporated to mitigate exposure bias. Evaluated on LibriSpeech test-clean, our approach achieves a word error rate (WER) of 1.95% (a 46% relative reduction over single-stage training), along with a speaker similarity score of 0.54 and a UTMOS of 4.00. To the best of our knowledge, this is the first work to realize high-fidelity, controllable continuous speech synthesis within the MLLM paradigm.

📝 Abstract
Unified architectures in multimodal large language models (MLLMs) have shown promise in handling diverse tasks within a single framework. In the text-to-speech (TTS) task, current MLLM-based approaches rely on discrete token representations, which disregard the inherently continuous nature of speech and can lose fine-grained acoustic information. In this work, we investigate TTS within the MLLM paradigm using continuous speech representations. We design a dual-head architecture and complementary training strategies for a robust model. (1) A diffusion head that generates continuous speech representations is added to the MLLM; it operates at the frame level and is strictly autoregressive. (2) The original language model head is retained to preserve multitask capability and to control the start and end of speech synthesis. (3) Masked training is employed to address exposure bias in autoregressive decoding. (4) To stabilize optimization, we propose a two-stage scheme in which the LM is frozen in the second stage, ensuring the diffusion head learns from a fixed input distribution. Evaluations on LibriSpeech-PC test-clean show that our approach achieves state-of-the-art autoregressive performance, with a WER of 1.95%, speaker similarity of 0.54, and UTMOS of 4.00. Two-stage training yields a 46% relative WER reduction over the one-stage baseline. These results highlight the effectiveness of combining autoregressive modeling with continuous-token diffusion, supported by a two-stage training procedure.
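The dual-head decoding loop described in the abstract can be sketched as follows. This is a toy illustration, not the paper's implementation: the MLLM backbone, the LM head's stop decision, and the per-frame diffusion sampler are all replaced by hypothetical stand-ins, and all names and dimensions are illustrative.

```python
import random

FRAME_DIM = 4        # toy dimensionality of one continuous speech frame
DIFFUSION_STEPS = 8  # toy number of denoising steps per frame

def mllm_hidden_state(history):
    # Stand-in for the MLLM backbone: summarize previously generated frames.
    if not history:
        return [0.0] * FRAME_DIM
    return [sum(f[i] for f in history) / len(history) for i in range(FRAME_DIM)]

def lm_head_says_stop(step, max_frames):
    # The retained language-model head controls the start and end of
    # synthesis; here we simply stop after a fixed number of frames.
    return step >= max_frames

def diffusion_head(hidden, rng):
    # Toy conditional denoiser: start from Gaussian noise and iteratively
    # move toward the conditioning hidden state, one continuous frame out.
    x = [rng.gauss(0.0, 1.0) for _ in range(FRAME_DIM)]
    for _ in range(DIFFUSION_STEPS):
        x = [xi + 0.5 * (hi - xi) for xi, hi in zip(x, hidden)]
    return x

def synthesize(max_frames=5, seed=0):
    rng = random.Random(seed)
    frames = []
    step = 0
    while not lm_head_says_stop(step, max_frames):
        h = mllm_hidden_state(frames)      # condition on all past frames
        frames.append(diffusion_head(h, rng))  # strictly autoregressive
        step += 1
    return frames

frames = synthesize()
print(len(frames), len(frames[0]))  # 5 frames, each FRAME_DIM-dimensional
```

The key property this sketch captures is frame-level autoregression: each continuous frame is produced by a diffusion sampler conditioned only on previously generated frames, while a separate LM head governs when synthesis ends.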
Problem

Research questions and friction points this paper is trying to address.

Addressing loss of acoustic detail in text-to-speech systems using continuous representations
Improving speaker similarity and speech quality in multimodal language models
Stabilizing training for autoregressive speech synthesis with diffusion models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continuous speech representations replace discrete tokens
Dual-head architecture with diffusion and language model heads
Two-stage training stabilizes optimization and improves performance
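The two-stage scheme above amounts to selective parameter freezing: both heads are updated jointly at first, then the LM side is frozen so the diffusion head sees a fixed input distribution. A minimal sketch, with toy scalar "parameters" and a stand-in for the gradient step (all names hypothetical):

```python
def train(params, stages):
    """Run toy training stages; `stages` is a list of (name, frozen_set)."""
    log = []
    for stage, frozen in stages:
        for name in params:
            if name in frozen:
                continue          # frozen parameters receive no updates
            params[name] += 0.1   # stand-in for one gradient step
            log.append((stage, name))
    return log

params = {"lm_head": 0.0, "diffusion_head": 0.0}
log = train(params, stages=[("stage1", set()),          # joint training
                            ("stage2", {"lm_head"})])   # LM frozen
print(params)  # lm_head updated in stage 1 only; diffusion_head in both
```

Freezing the LM in the later stage means the diffusion head's conditioning inputs stop drifting during its optimization, which is the stabilization effect the paper attributes to two-stage training.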