🤖 AI Summary
This study investigates whether vision-language models (VLMs) understand the physiological mechanisms of vowel articulation from tongue-position imagery. The authors construct video and image datasets of tongue motion from an existing real-time MRI corpus and design few-shot and zero-shot prompting tasks that test whether VLMs can map tongue positions to vowels, with and without reference examples. VLMs perform well in the few-shot setting but degrade substantially in the zero-shot setting, suggesting that they rely on external exemplars rather than internalized phonetic-physiological knowledge for this cross-modal reasoning. The work highlights a gap in evaluating multimodal models' comprehension of speech physiology, and the dataset-building code is publicly released.
📝 Abstract
Vowels are primarily characterized by tongue position. Humans have identified these articulatory features through their own experience and through direct observation, for example with MRI. With this knowledge and experience, we can explain and understand the relationship between tongue positions and vowels, which helps language learners acquire pronunciation. Since language models (LMs) are trained on large amounts of data spanning linguistic and medical fields, our preliminary studies indicate that an LM can explain the pronunciation mechanisms of vowels. However, it is unclear whether multi-modal LMs, such as vision LMs, align this textual knowledge with visual information. One question arises: do LMs associate real tongue positions with vowel articulation? In this study, we created video and image datasets from an existing real-time MRI dataset and investigated whether LMs can understand vowel articulation from vision-based tongue-position information. Our findings suggest that LMs show potential for understanding vowels and tongue positions when reference examples are provided, but struggle without them. Our code for dataset building is available on GitHub.
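The few-shot versus zero-shot comparison described above can be illustrated with a minimal, hypothetical prompt-assembly sketch. Everything here is an assumption for illustration: the vowel inventory, the image paths, and the `build_prompt` helper are not from the paper, and the actual prompt wording and VLM interface used in the study are not specified.

```python
# Hypothetical sketch: assembling zero-shot vs. few-shot prompts for a VLM
# that classifies the vowel articulated in a midsagittal rtMRI frame.
# The vowel set and file paths below are illustrative, not from the paper.

VOWELS = ["a", "i", "u", "e", "o"]

def build_prompt(query_image, examples=None):
    """Return a list of prompt parts (text and image placeholders).

    examples: optional list of (image_path, vowel) reference pairs.
    With examples the prompt is few-shot; without, it is zero-shot.
    """
    parts = [{
        "type": "text",
        "text": ("Identify the vowel being articulated from the tongue "
                 f"position. Answer with one of: {', '.join(VOWELS)}."),
    }]
    # Interleave labeled reference images before the query (few-shot case).
    for img, vowel in (examples or []):
        parts.append({"type": "image", "path": img})
        parts.append({"type": "text",
                      "text": f"This tongue position corresponds to /{vowel}/."})
    # The query image always comes last, followed by the question.
    parts.append({"type": "image", "path": query_image})
    parts.append({"type": "text", "text": "Which vowel is this?"})
    return parts

zero_shot = build_prompt("frames/query.png")
few_shot = build_prompt("frames/query.png",
                        examples=[("frames/ref_a.png", "a"),
                                  ("frames/ref_i.png", "i")])
```

The only structural difference between the two conditions is the presence of labeled reference images, so any performance gap between them isolates the model's reliance on in-context exemplars.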