AI Summary
Mandarin lip-to-speech (L2S) synthesis faces two key challenges: the complexity of viseme-to-phoneme mapping and the critical influence of lexical tone on intelligibility. To address these, we propose a tone-aware cross-lingual transfer generative model. First, we leverage English pre-trained audio-visual self-supervised models (e.g., Wav2Vec 2.0) for cross-lingual knowledge transfer, mitigating the scarcity of paired Mandarin lip-video data. Second, we incorporate discrete speech units derived from ASR fine-tuning as strong linguistic priors to explicitly guide fundamental frequency (F0) contour modeling, enabling accurate tone synthesis. Third, we integrate flow matching with a two-stage training paradigm to enhance speech naturalness. Experiments demonstrate that our method significantly outperforms existing state-of-the-art approaches, achieving a 28.3% reduction in word error rate (WER), a 19.7% improvement in tone accuracy, and a 0.65-point gain in Mean Opinion Score (MOS) for prosodic naturalness.
Abstract
Lip-to-speech (L2S) synthesis for Mandarin is a significant challenge, hindered by complex viseme-to-phoneme mappings and the critical role of lexical tones in intelligibility. To address these challenges, we propose Lexical Tone-Aware Lip-to-Speech (LTA-L2S). To tackle viseme-to-phoneme complexity, our model adapts an English pre-trained audio-visual self-supervised learning (SSL) model via a cross-lingual transfer learning strategy. This strategy not only transfers universal knowledge learned from extensive English data to the Mandarin domain but also circumvents the prohibitive cost of training such a model from scratch. To explicitly model lexical tones and enhance intelligibility, we further employ a flow-matching model to generate the F0 contour, guided by ASR-fine-tuned SSL speech units that carry crucial suprasegmental information. Overall speech quality is then elevated through a two-stage training paradigm, in which a flow-matching postnet refines the coarse spectrogram produced in the first stage. Extensive experiments demonstrate that LTA-L2S significantly outperforms existing methods in both speech intelligibility and tonal accuracy.
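To make the unit-guided F0 generation step concrete, the sketch below shows a minimal conditional flow-matching objective for F0 contours in PyTorch. This is an illustrative reconstruction, not the paper's implementation: the network architecture, embedding sizes, and names (`F0FlowMatcher`, `cfm_loss`) are all hypothetical, and the conditioning here is a simple per-frame embedding of discrete speech units standing in for the ASR-fine-tuned SSL units described above.

```python
import torch
import torch.nn as nn


class F0FlowMatcher(nn.Module):
    """Toy vector-field network for conditional flow matching over F0.

    Predicts a per-frame velocity from the noisy F0 state x_t, the
    flow time t, and an embedding of the discrete speech units that
    condition the contour (all sizes are illustrative).
    """

    def __init__(self, n_units: int = 100, dim: int = 64):
        super().__init__()
        self.unit_emb = nn.Embedding(n_units, dim)
        self.net = nn.Sequential(
            nn.Linear(1 + 1 + dim, dim),  # inputs: [x_t, t, unit condition]
            nn.SiLU(),
            nn.Linear(dim, 1),            # output: predicted velocity per frame
        )

    def forward(self, x_t, t, units):
        # x_t: (B, T) noisy F0; t: (B,) flow time; units: (B, T) int ids
        cond = self.unit_emb(units)                       # (B, T, dim)
        t_exp = t[:, None, None].expand(-1, x_t.size(1), 1)
        inp = torch.cat([x_t[..., None], t_exp, cond], dim=-1)
        return self.net(inp).squeeze(-1)                  # (B, T)


def cfm_loss(model, f0, units):
    """Optimal-transport conditional flow matching: regress the
    straight-line velocity (x1 - x0) along x_t = (1 - t) x0 + t x1."""
    x0 = torch.randn_like(f0)              # noise endpoint of the flow
    t = torch.rand(f0.size(0))             # uniform flow time per sample
    x_t = (1 - t[:, None]) * x0 + t[:, None] * f0
    v_target = f0 - x0                     # constant velocity of the OT path
    v_pred = model(x_t, t, units)
    return ((v_pred - v_target) ** 2).mean()
```

At inference, sampling would integrate the learned vector field from noise to an F0 contour (e.g., with a few Euler steps), with the unit condition steering the trajectory toward the correct tonal pattern.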