🤖 AI Summary
This work addresses the visual text-to-speech (V-TTS) task, aiming to synthesize high-fidelity speech that is speaker-cloneable, semantically faithful to input text, and strictly synchronized with lip movements in the input video. To this end, we propose VSpeechLM, a vision-speech language model. First, a text-video alignment module establishes phoneme-level lip-motion–speech correspondence, generating an extended phoneme sequence enriched with temporal synchronization cues. Second, a multimodal decoder—built upon a speech large language model—fuses cross-modal information via joint text-video embeddings and phoneme-level alignment. Evaluated on benchmarks including LRS3, our method achieves state-of-the-art performance across all three key metrics: speech quality (MOS), speaker similarity (SIM), and lip-sync error (LSE). Notably, it is the first approach to simultaneously achieve high naturalness, high-fidelity voice cloning, and frame-accurate lip synchronization.
📝 Abstract
The task of Visual Text-to-Speech (VisualTTS), also known as video dubbing, aims to generate speech synchronized with the lip movements in an input video, in addition to being consistent with the content of the input text and cloning the timbre of a reference speech. Existing VisualTTS models typically adopt lightweight architectures and design specialized modules to achieve each of the above goals, yet their speech quality remains unsatisfactory due to limited model capacity and the scarcity of VisualTTS data. Recently, speech large language models (SpeechLLMs) have shown a robust ability to generate high-quality speech, but little work has been done to effectively leverage temporal cues from video input when generating lip-synchronized speech. To generate speech that is both high-quality and lip-synchronized in VisualTTS tasks, we propose a novel Visual Speech Language Model called VSpeechLM, built upon a SpeechLLM. To capture the synchronization relationship between text and video, we propose a text-video aligner. It first learns fine-grained alignment between phonemes and lip movements, and then outputs an expanded phoneme sequence containing lip-synchronization cues. Next, our proposed SpeechLLM-based decoder takes the expanded phoneme sequence as input and learns to generate lip-synchronized speech. Extensive experiments demonstrate that our VSpeechLM significantly outperforms previous VisualTTS methods in terms of overall quality, speaker similarity, and synchronization metrics.
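The "expanded phoneme sequence" idea can be illustrated with a minimal sketch. This is an assumption about the mechanism, not the paper's implementation: we suppose the aligner predicts a per-phoneme duration (in video frames) from lip-movement features, then repeats each phoneme accordingly, length-regulator style, so the downstream decoder sees temporal lip-sync cues. All names here are illustrative.

```python
# Hypothetical sketch of phoneme expansion with temporal cues (illustrative
# names; the actual aligner in the paper may differ).

def expand_phonemes(phonemes: list[str], durations: list[int]) -> list[str]:
    """Repeat each phoneme by its predicted lip-frame duration so the
    output sequence is time-aligned with the video."""
    expanded = []
    for ph, dur in zip(phonemes, durations):
        expanded.extend([ph] * dur)
    return expanded

# Example: the word "hi" as phonemes HH, AY, with durations of 2 and 3
# video frames predicted from the speaker's lip movements.
print(expand_phonemes(["HH", "AY"], [2, 3]))
# -> ['HH', 'HH', 'AY', 'AY', 'AY']
```

Under this sketch, the decoder no longer has to infer timing on its own: the length of the expanded sequence directly encodes when each phoneme should be uttered relative to the video.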