🤖 AI Summary
This study investigates the cognitive mechanisms underlying foreign language learners' inference of unfamiliar word meanings in multimodal picture–sentence contexts. It combines controlled experiments (target-word masking with illustrated sentences), quantitative multimodal feature analysis (image saliency, syntactic and semantic textual cues), and cross-linguistic participant comparisons, integrating human behavioral analysis with evaluation of AI reasoning models, to establish the first multimodal inference framework designed for semantic ambiguity resolution. Results reveal that intuitive cues (e.g., image centrality) only weakly predict inference accuracy; verb semantic roles in the text and image–object interactions exhibit stronger predictive power. Crucially, participants' native-language typology significantly modulates their inference strategies. These findings provide an empirical foundation for personalized vocabulary instruction and identify key bottlenecks, along with corresponding optimization pathways, in AI models' simulation of human multimodal semantic reasoning.
📝 Abstract
We investigate a new setting for foreign language learning in which learners infer the meaning of unfamiliar words from a multimodal context: a sentence describing a paired image. We conduct studies with human participants using a variety of image–text pairs. We analyze which features of the data (i.e., the images and texts) make it easier for participants to infer the meaning of a masked or unfamiliar word, and which language backgrounds of the participants correlate with success. We find that only some intuitive features correlate strongly with participant performance, prompting the need for further investigation of features that predict success on these tasks. We also analyze the ability of AI systems to reason about participant performance and identify promising directions for improving this reasoning ability.