🤖 AI Summary
Clinical speech analysis requires accurate automatic classification of articulatory-phonemic features, such as manner, place, and voicing, to advance understanding of speech production mechanisms and enable personalized speech rehabilitation.
Method: We propose a contrastive learning-driven audiovisual multimodal deep learning framework that jointly models real-time magnetic resonance imaging (rtMRI) and synchronized acoustic signals to learn consistent cross-modal representations.
Contribution/Results: Our approach substantially improves discriminability along articulatory dimensions compared to unimodal baselines and conventional fusion methods. Evaluated on the USC-TIMIT dataset, it achieves a mean F1-score of 0.81, an absolute improvement of 0.23 over the best unimodal baseline, establishing new state-of-the-art performance. This demonstrates the efficacy and robustness of contrastive cross-modal representation learning for clinically relevant articulatory analysis.
📝 Abstract
Accurate classification of articulatory-phonological features plays a vital role in understanding human speech production and developing robust speech technologies, particularly in clinical contexts where targeted phonemic analysis and therapy can improve diagnostic accuracy and support personalized rehabilitation. In this work, we propose a multimodal deep learning framework that combines real-time magnetic resonance imaging (rtMRI) and speech signals to classify three key articulatory dimensions: manner of articulation, place of articulation, and voicing. We perform classification on 15 phonological classes derived from these articulatory dimensions and evaluate the system in four audio/vision configurations: unimodal rtMRI, unimodal audio, multimodal middle fusion, and contrastive learning-based audio-vision fusion. Experimental results on the USC-TIMIT dataset show that our contrastive learning-based approach achieves state-of-the-art performance, with an average F1-score of 0.81, an absolute improvement of 0.23 over the unimodal baseline. These results confirm the effectiveness of contrastive representation learning for multimodal articulatory analysis. Our code and processed dataset will be made publicly available at https://github.com/DaE-plz/AC_Contrastive_Phonology to support future research.
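The exact form of the contrastive audio-vision objective is not spelled out above, so the snippet below is only a minimal sketch of one common way such cross-modal alignment is implemented: a CLIP-style symmetric InfoNCE loss that pulls together embeddings of temporally paired rtMRI and audio segments within a batch. The function name, embedding dimension, and temperature are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch (assumed, not the authors' exact model): symmetric InfoNCE
# loss aligning rtMRI and audio embeddings of the same speech segment.
import torch
import torch.nn.functional as F


def contrastive_audio_vision_loss(mri_emb: torch.Tensor,
                                  audio_emb: torch.Tensor,
                                  temperature: float = 0.07) -> torch.Tensor:
    """mri_emb, audio_emb: (batch, dim) embeddings of paired segments."""
    # L2-normalize so dot products become cosine similarities.
    mri = F.normalize(mri_emb, dim=-1)
    aud = F.normalize(audio_emb, dim=-1)
    # Similarity matrix: entry (i, j) compares rtMRI segment i with audio segment j.
    logits = mri @ aud.t() / temperature
    # Temporally paired segments lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: rtMRI-to-audio and audio-to-rtMRI retrieval.
    loss_m2a = F.cross_entropy(logits, targets)
    loss_a2m = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_m2a + loss_a2m)


if __name__ == "__main__":
    # Toy usage with random tensors standing in for encoder outputs.
    mri_emb = torch.randn(8, 256)    # e.g., output of an rtMRI video encoder
    audio_emb = torch.randn(8, 256)  # e.g., output of an acoustic encoder
    print(contrastive_audio_vision_loss(mri_emb, audio_emb).item())
```

In a setup like this, the aligned embeddings would then be fed to the downstream classifier for the 15 phonological classes; the released repository should be consulted for the actual encoders and training details.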