MRI2Speech: Speech Synthesis from Articulatory Movements Recorded by Real-time MRI

📅 2024-12-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing real-time MRI (rtMRI)-based speech synthesis methods suffer from noise corruption and inadequate speaker-specific acoustic modeling, relying on noisy ground-truth speech that compromises intelligibility and cross-speaker generalizability. To address these limitations, we propose the first end-to-end rtMRI-to-speech framework. Our approach: (1) pioneers the adaptation of the multimodal self-supervised AV-HuBERT model to rtMRI-based text prediction; (2) introduces a flow-based speaker-adaptive duration predictor enabling precise phoneme-level timing estimation without clean speech supervision; and (3) integrates an rtMRI video-masking analysis with a neural vocoder for robust articulatory-to-acoustic mapping. Evaluated on the USC-TIMIT MRI dataset, our method achieves a word error rate (WER) of 15.18%, substantially outperforming the prior state of the art. It further supports zero-shot speaker generalization and controllable voice cloning for arbitrary target timbres.
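The masking analysis mentioned above ablates spatial regions of the rtMRI video (e.g. around the lips or the tongue) to measure each articulator's contribution to text prediction. A minimal sketch of such a spatial mask, assuming toy frame dimensions and region coordinates (the paper's actual regions and resolution are not given here):

```python
import numpy as np

def mask_region(video, top, left, height, width):
    """Return a copy of a (frames, H, W) rtMRI clip with one
    spatial region zeroed out in every frame."""
    masked = video.copy()
    masked[:, top:top + height, left:left + width] = 0.0
    return masked

# Toy clip: 40 frames of 84x84 pixels, all ones for illustration.
video = np.ones((40, 84, 84))

# Hypothetical "lip region" coordinates -- an assumption, not the paper's.
lips_masked = mask_region(video, top=60, left=30, height=20, width=24)

# Fraction of pixels removed per frame:
removed = 1.0 - lips_masked[0].mean()
print(round(removed, 4))  # → 0.068
```

Feeding `lips_masked` instead of `video` to the text predictor and comparing WER against the unmasked baseline gives a per-articulator importance score.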

📝 Abstract
Previous real-time MRI (rtMRI)-based speech synthesis models depend heavily on noisy ground-truth speech. Applying a loss directly over ground-truth mel-spectrograms entangles speech content with MRI noise, resulting in poor intelligibility. We introduce a novel approach that adapts the multimodal self-supervised AV-HuBERT model for text prediction from rtMRI and incorporates a new flow-based duration predictor for speaker-specific alignment. The predicted text and durations are then used by a speech decoder to synthesize aligned speech in any novel voice. We conduct thorough experiments on two datasets and demonstrate our method's generalization to unseen speakers. We assess our framework's behavior by masking parts of the rtMRI video to evaluate the impact of different articulators on text prediction. Our method achieves a 15.18% Word Error Rate (WER) on the USC-TIMIT MRI corpus, a substantial improvement over the current state of the art. Speech samples are available at https://mri2speech.github.io/MRI2Speech/
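The abstract describes a three-stage inference flow: an adapted AV-HuBERT predicts text from the rtMRI clip, a flow-based duration predictor assigns speaker-specific frame counts to each token, and a speech decoder expands tokens by duration into audio in a target voice. A minimal sketch of that dataflow, where every function is an illustrative stand-in (names, shapes, and logic are assumptions, not the authors' API):

```python
import numpy as np

def predict_text(rtmri_video):
    """Stand-in for the adapted AV-HuBERT text predictor: maps a
    (frames, H, W) rtMRI clip to a token sequence (toy 4x downsampling)."""
    n_tokens = max(1, rtmri_video.shape[0] // 4)
    return list(range(n_tokens))

def predict_durations(tokens, speaker_embedding):
    """Stand-in for the flow-based speaker-adaptive duration predictor:
    one positive frame count per token, conditioned on the speaker."""
    base = 3 + int(abs(speaker_embedding.sum())) % 3
    return [base for _ in tokens]

def decode_speech(tokens, durations, target_voice):
    """Stand-in for the speech decoder / vocoder: expands each token by
    its duration into frame-level output in the target voice."""
    frames = [t for t, d in zip(tokens, durations) for _ in range(d)]
    return np.asarray(frames, dtype=float) + target_voice

video = np.zeros((40, 84, 84))   # toy rtMRI clip
spk = np.array([0.5, -0.2])      # toy speaker embedding
tokens = predict_text(video)
durs = predict_durations(tokens, spk)
wave = decode_speech(tokens, durs, target_voice=0.0)
print(len(tokens), sum(durs), wave.shape[0])  # → 10 30 30
```

Because the decoder is conditioned only on predicted text and durations (not on the noisy recorded audio), the same token/duration sequence can be rendered in any novel voice, which is how the framework avoids entangling content with MRI scanner noise.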
Problem

Research questions and friction points this paper is trying to address.

Real-time MRI
Speech Synthesis
Voice Quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

AV-HuBERT Model
Real-time MRI
Speech Synthesis
Neil Shah
CVIT, IIIT Hyderabad, India
Ayan Kashyap
CVIT, IIIT Hyderabad, India
S. Karande
TCS Research Pune, India
Vineet Gandhi
Associate Professor at IIIT Hyderabad
Creative AI · Applied Machine Learning · Multimedia