A Speech-to-Video Synthesis Approach Using Spatio-Temporal Diffusion for Vocal Tract MRI

📅 2025-03-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenge of synthesizing real-time or cine magnetic resonance imaging (RT-/cine-MRI) videos from speech signals to visualize dynamic articulatory tract anatomy during phonation, supporting clinical assessment and personalized rehabilitation. We propose the first spatiotemporal diffusion framework integrating anatomical structure guidance and temporal modeling: leveraging rigorously time-aligned speech–MRI preprocessing, we adapt the Stable Diffusion architecture by incorporating an anatomical structure constraint loss and a temporal attention mechanism. The method significantly improves cross-subject generalization and real-time generation capability for unseen utterances. Validated on data from healthy participants and tongue cancer patients, the synthesized videos exhibit anatomically plausible structures and high motion fidelity. Quantitative metrics and expert clinical evaluation jointly confirm the method’s clinical utility and visualization accuracy.

Technology Category

Application Category

📝 Abstract
Understanding the relationship between vocal tract motion during speech and the resulting acoustic signal is crucial for aided clinical assessment and developing personalized treatment and rehabilitation strategies. Toward this goal, we introduce an audio-to-video generation framework for creating Real Time/cine-Magnetic Resonance Imaging (RT-/cine-MRI) visuals of the vocal tract from speech signals. Our framework first preprocesses RT-/cine-MRI sequences and speech samples to achieve temporal alignment, ensuring synchronization between visual and audio data. We then employ a modified stable diffusion model, integrating structural and temporal blocks, to effectively capture movement characteristics and temporal dynamics in the synchronized data. This process enables the generation of MRI sequences from new speech inputs, improving the conversion of audio into visual data. We evaluated our framework on healthy controls and tongue cancer patients by analyzing and comparing the vocal tract movements in synthesized videos. Our framework demonstrated adaptability to new speech inputs and effective generalization. In addition, positive human evaluations confirmed its effectiveness, with realistic and accurate visualizations, suggesting its potential for outpatient therapy and personalized simulation of vocal tract visualizations.
Problem

Research questions and friction points this paper is trying to address.

Understand vocal tract motion and acoustic signal relationship.
Generate RT-/cine-MRI visuals from speech signals.
Improve audio-to-video conversion for clinical applications.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Audio-to-video generation for vocal tract MRI
Modified stable diffusion model for synchronization
Realistic vocal tract visualization from speech
🔎 Similar Papers
No similar papers found.
P
Paula Andrea P'erez-Toro
Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universit¨at Erlangen-N¨urnberg, Erlangen, 91058, Bayern, Germany; GITA Lab, Faculty of Engineering, Universidad de Antioquia, Medell´ın, 050010, Antioquia, Colombia
T
Tom'as Arias-Vergara
Harvard Medical School/Massachusetts General Hospital, Boston, 02114, Massachusetts, USA; Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universit¨at Erlangen-N¨urnberg, Erlangen, 91058, Bayern, Germany; GITA Lab, Faculty of Engineering, Universidad de Antioquia, Medell´ın, 050010, Antioquia, Colombia
Fangxu Xing
Fangxu Xing
Harvard Medical School, Massachusetts General Hospital
Image AnalysisArtificial IntelligenceDeep LearningMachine LearningComputer Vision
X
Xiaofeng Liu
Department of Radiology & Biomedical Imaging and Biomedical Informatics & Data Science, Yale University, New Heaven, 06510, Connecticut, USA
Maureen Stone
Maureen Stone
University of Maryland Dental School
tonguespeechMRIultrasound
Jiachen Zhuo
Jiachen Zhuo
Associate Professor, University of Maryland School of Medicine
MRIMedical ImagingTraumatic Brain InjuryMR-guided Focused Ultrasound (MRgFUS)Speech
J
Juan Rafael Orozco-Arroyave
Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universit¨at Erlangen-N¨urnberg, Erlangen, 91058, Bayern, Germany; GITA Lab, Faculty of Engineering, Universidad de Antioquia, Medell´ın, 050010, Antioquia, Colombia
E
Elmar Noth
Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universit¨at Erlangen-N¨urnberg, Erlangen, 91058, Bayern, Germany
Jana Hutter
Jana Hutter
UKER/FAU Erlangen // King's College London
Magnetic Resonance ImagingPerinatal ImagingQuantitative Imaging
Jerry L. Prince
Jerry L. Prince
Professor of Electrical and Computer Engineering, Johns Hopkins University
Medical image analysisbiomedical image analysismedical image computingmedical imagingcomputer vision
A
Andreas Maier
Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universit¨at Erlangen-N¨urnberg, Erlangen, 91058, Bayern, Germany
Jonghye Woo
Jonghye Woo
Associate Professor of Radiology, Harvard Medical School | MGH
Medical Image AnalysisMedical ImagingComputer VisionMachine LearningSpeech