🤖 AI Summary
Accurate segmentation of articulatory structures in real-time magnetic resonance imaging (rtMRI) is hindered by existing methods' reliance on unimodal visual modeling, which forgoes the complementary guidance available from acoustic and phonological priors.
Method: We propose the first collaborative multimodal framework for rtMRI vocal tract segmentation, jointly encoding video, audio, and phonological features. It employs cross-modal attention for dynamic feature alignment and integrates a contrastive learning objective to enhance representation robustness. Crucially, the framework maintains stable performance when a modality is unavailable at inference (e.g., audio), improving generalizability.
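A minimal sketch of the fusion idea, assuming PyTorch; the module name `CrossModalFusion`, the token dimensions, and the per-modality attention blocks are illustrative assumptions, not the paper's exact architecture. Video tokens act as queries attending to audio and phonological tokens, and an absent modality is simply skipped, mirroring audio-absent inference:

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Hypothetical cross-modal attention fusion: video tokens attend to
    audio and phonological tokens. Dimensions and names are assumptions."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_phone = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, video, audio=None, phone=None):
        # video: (B, Tv, dim); audio: (B, Ta, dim); phone: (B, Tp, dim)
        fused = video
        if audio is not None:  # skip absent streams (audio-absent inference)
            fused = fused + self.attn_audio(fused, audio, audio)[0]
        if phone is not None:
            fused = fused + self.attn_phone(fused, phone, phone)[0]
        return self.norm(fused)
```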
Contribution/Results: Evaluated on a subset of the USC-75 rtMRI dataset, our method achieves a Dice score of 0.95 and a 95th-percentile Hausdorff distance (HD₉₅) of 4.20 mm, surpassing both unimodal and multimodal baselines. This work pioneers the joint optimization of cross-modal alignment and contrastive learning for rtMRI segmentation, enabling high-precision, robust analysis of dynamic vocal tract anatomy.
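For reference, the two reported metrics have standard definitions (the notation here is generic, not the paper's: P and G are the predicted and ground-truth masks, ∂P and ∂G their boundaries, and P₉₅ the 95th percentile over boundary points):

```latex
\mathrm{Dice}(P,G) = \frac{2\,|P \cap G|}{|P| + |G|},
\qquad
\mathrm{HD}_{95}(P,G) = \max\Big\{
  \mathop{\mathrm{P}_{95}}_{p \in \partial P} \min_{g \in \partial G} \lVert p - g \rVert,\;
  \mathop{\mathrm{P}_{95}}_{g \in \partial G} \min_{p \in \partial P} \lVert g - p \rVert
\Big\}
```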
📝 Abstract
Accurately segmenting articulatory structures in real-time magnetic resonance imaging (rtMRI) remains challenging, as most existing methods rely almost entirely on visual cues. Yet synchronized acoustic and phonological signals provide complementary context that can enrich visual information and improve precision. In this paper, we introduce VocSegMRI, a multimodal framework that integrates video, audio, and phonological inputs through cross-attention fusion for dynamic feature alignment. To further enhance cross-modal representations, we incorporate a contrastive learning objective that improves segmentation performance even when the audio modality is unavailable at inference. Evaluated on a subset of the USC-75 rtMRI dataset, our approach achieves state-of-the-art performance, with a Dice score of 0.95 and a 95th-percentile Hausdorff distance (HD₉₅) of 4.20 mm, outperforming both unimodal and multimodal baselines. Ablation studies confirm the contributions of cross-attention and contrastive learning to segmentation precision and robustness. These results highlight the value of integrative multimodal modeling for accurate vocal tract analysis.
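The abstract does not spell out the contrastive objective, so the following is a hypothetical InfoNCE-style sketch in PyTorch: paired video/audio embeddings for the same frames are treated as positives and all other pairs in the batch as negatives. The function name `infonce_loss`, the pooling to one embedding per frame, and the temperature value are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def infonce_loss(video_emb: torch.Tensor,
                 audio_emb: torch.Tensor,
                 temperature: float = 0.07) -> torch.Tensor:
    """Generic InfoNCE-style contrastive loss aligning video and audio
    embeddings; not the paper's exact objective.

    video_emb, audio_emb: (B, D) pooled embeddings of the same B frames.
    """
    v = F.normalize(video_emb, dim=-1)
    a = F.normalize(audio_emb, dim=-1)
    logits = v @ a.t() / temperature                     # (B, B) similarities
    targets = torch.arange(v.size(0), device=v.device)   # diagonal = positives
    # Symmetric cross-entropy over video-to-audio and audio-to-video matching.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```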