🤖 AI Summary
Existing real-time MRI (rtMRI)-based speech synthesis methods rely on noisy ground-truth speech and lack adequate speaker-specific acoustic modeling, which compromises both intelligibility and cross-speaker generalization. To address these limitations, we propose a novel rtMRI-to-speech framework. Our approach: (1) adapts the multimodal self-supervised AV-HuBERT model to text prediction from rtMRI video; (2) introduces a flow-based, speaker-adaptive duration predictor that estimates phoneme-level timing without clean speech supervision; and (3) feeds the predicted text and durations to a speech decoder that synthesizes aligned speech in any novel target voice. We further mask regions of the rtMRI video to quantify how individual articulators contribute to text prediction. Evaluated on the USC-TIMIT MRI corpus, our method achieves a word error rate (WER) of 15.18%, substantially outperforming the prior state-of-the-art, and it generalizes to unseen speakers.
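To make the three-stage design concrete, below is a minimal sketch of the inference pipeline, assuming hypothetical module names (`RtMRIToSpeech`, the stand-in encoders, and all shapes are illustrative placeholders, not the authors' released code or the actual AV-HuBERT / flow architecture).

```python
# Illustrative sketch only: simple stand-ins for (1) the video-to-text encoder,
# (2) the speaker-adaptive duration predictor, and (3) the speech decoder.
import torch
import torch.nn as nn

class RtMRIToSpeech(nn.Module):
    def __init__(self, vocab_size=40, hidden=256, n_mels=80):
        super().__init__()
        # Stand-in for the adapted AV-HuBERT video encoder plus a text head.
        self.video_encoder = nn.GRU(input_size=68 * 68, hidden_size=hidden, batch_first=True)
        self.text_head = nn.Linear(hidden, vocab_size)
        # Stand-in for the flow-based, speaker-adaptive duration predictor.
        self.duration_predictor = nn.Linear(hidden + hidden, 1)
        # Stand-in for the speech decoder conditioned on a target-speaker embedding.
        self.speech_decoder = nn.GRU(input_size=hidden + hidden, hidden_size=n_mels, batch_first=True)

    def forward(self, mri_frames, speaker_emb):
        # mri_frames: (batch, time, H*W) flattened rtMRI video frames.
        feats, _ = self.video_encoder(mri_frames)            # articulatory features
        text_logits = self.text_head(feats)                  # frame-level text prediction
        spk = speaker_emb.unsqueeze(1).expand(-1, feats.size(1), -1)
        cond = torch.cat([feats, spk], dim=-1)               # speaker-conditioned features
        durations = torch.relu(self.duration_predictor(cond))
        mel, _ = self.speech_decoder(cond)                   # aligned mel-spectrogram
        return text_logits, durations, mel

model = RtMRIToSpeech()
video = torch.randn(1, 120, 68 * 68)     # ~2 s of rtMRI frames (illustrative size)
speaker = torch.randn(1, 256)            # embedding of a novel target voice
text_logits, durations, mel = model(video, speaker)
print(text_logits.shape, durations.shape, mel.shape)
```

In the paper's actual framework the durations align the predicted text to the mel-spectrogram frames; the toy decoder above skips that alignment step for brevity.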
📝 Abstract
Previous real-time MRI (rtMRI)-based speech synthesis models depend heavily on noisy ground-truth speech. Applying a loss directly on ground-truth mel-spectrograms entangles speech content with MRI noise, resulting in poor intelligibility. We introduce a novel approach that adapts the multi-modal self-supervised AV-HuBERT model for text prediction from rtMRI and incorporates a new flow-based duration predictor for speaker-specific alignment. The predicted text and durations are then used by a speech decoder to synthesize aligned speech in any novel voice. We conduct thorough experiments on two datasets and demonstrate our method's generalization to unseen speakers. We also assess the framework by masking parts of the rtMRI video to evaluate the impact of different articulators on text prediction. Our method achieves a 15.18% Word Error Rate (WER) on the USC-TIMIT MRI corpus, a substantial improvement over the current state-of-the-art. Speech samples are available at https://mri2speech.github.io/MRI2Speech/
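The articulator-masking evaluation mentioned above can be pictured with the short sketch below; the region coordinates, the `transcribe` call, and the WER scorer are hypothetical placeholders, not the masks or code used in the paper.

```python
# Illustrative sketch: zero out a rectangular region of every rtMRI frame
# (loosely approximating one articulator) and re-run text prediction to see
# how the WER changes.
import torch

def mask_region(video, top, bottom, left, right):
    """Zero out a rectangular region in every frame of a (batch, time, H, W) video."""
    masked = video.clone()
    masked[:, :, top:bottom, left:right] = 0.0
    return masked

video = torch.randn(1, 120, 68, 68)  # rtMRI clip (illustrative size)
regions = {"lips": (0, 20, 20, 48), "tongue": (20, 45, 15, 50), "velum": (10, 30, 45, 68)}

for name, (t, b, l, r) in regions.items():
    masked = mask_region(video, t, b, l, r)
    # predicted_text = model.transcribe(masked)          # hypothetical call
    # wer = compute_wer(predicted_text, reference_text)  # hypothetical scorer
    print(f"masked {name}: region of size {(b - t, r - l)} zeroed out")
```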