🤖 AI Summary
This study investigates whether linguistic and cultural background shapes the attentional patterns of vision-language models (VLMs): specifically, whether Japanese-trained VLMs exhibit holistic processing (emphasizing scene-level context) while English-trained VLMs adopt analytic processing (focusing on individual objects). Drawing on cross-cultural cognitive paradigms, the study systematically compares image captions generated by Japanese–English bilingual VLMs, complemented by attention visualization and quantitative analysis. The findings provide the first empirical evidence that VLMs not only acquire linguistic structure but also implicitly internalize culture-specific cognitive preferences embedded in their training data: Japanese VLMs show significantly stronger global relational modeling, whereas English VLMs prioritize object-centric representations, aligning closely with well-established patterns in human cross-cultural cognition. This work highlights language's role as a cultural carrier that shapes the cognitive behavior of multimodal models, offering both theoretical grounding and an evaluation framework for culturally aware AI.
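The attention-visualization comparison described above can be sketched with a simple dispersion score. This is an illustrative assumption, not the study's actual metric: it treats the normalized entropy of an attention map over image patches as a proxy, where values near 1.0 indicate attention spread across the whole scene (holistic) and values near 0.0 indicate attention concentrated on a few patches (analytic, object-centric).

```python
import math

def attention_dispersion(attn):
    """Normalized entropy of a flat attention map over image patches.

    `attn` is a list of non-negative attention weights, one per patch.
    Returns a value in [0, 1]: near 1.0 when attention is spread
    evenly across the scene, near 0.0 when it concentrates on a
    single patch.
    """
    total = sum(attn)
    probs = [a / total for a in attn]
    # Small epsilon guards against log(0) for unattended patches.
    entropy = -sum(p * math.log(p + 1e-12) for p in probs)
    return entropy / math.log(len(probs))
```

A uniform map scores near 1.0, while a one-hot map scores near 0.0, so the score orders maps from object-centric to scene-wide regardless of patch count.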
📝 Abstract
Cross-cultural research in perception and cognition has shown that individuals from different cultural backgrounds process visual information in distinct ways. East Asians, for example, tend to adopt a holistic perspective, attending to contextual relationships, whereas Westerners often employ an analytic approach, focusing on individual objects and their attributes. In this study, we investigate whether Vision-Language Models (VLMs) trained predominantly on different languages, specifically Japanese and English, exhibit similar culturally grounded attentional patterns. Through a comparative analysis of image descriptions, we examine whether these models reflect differences in holistic versus analytic tendencies. Our findings suggest that VLMs not only internalize the structural properties of language but also reproduce cultural behaviors embedded in the training data, indicating that cultural cognition may implicitly shape model outputs.
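The holistic-versus-analytic comparison of image descriptions could be operationalized as a crude lexical index. The cue-word lists and the index formula below are hypothetical placeholders for illustration, not the paper's actual method: the idea is to score each caption by the fraction of matched cue words that are relational or contextual rather than object attributes.

```python
import re

# Hypothetical cue-word lists; a real study would need validated lexicons.
RELATIONAL_TERMS = {"background", "behind", "beside", "between",
                    "scene", "among", "surrounding"}
OBJECT_ATTRIBUTE_TERMS = {"red", "blue", "small", "large",
                          "round", "striped", "wooden"}

def holistic_index(caption):
    """Fraction of matched cue words that are relational/contextual.

    Returns 1.0 for purely scene-relational captions, 0.0 for purely
    object-attribute captions, and 0.5 when no cue words match.
    """
    tokens = re.findall(r"[a-z]+", caption.lower())
    relational = sum(t in RELATIONAL_TERMS for t in tokens)
    object_attr = sum(t in OBJECT_ATTRIBUTE_TERMS for t in tokens)
    matched = relational + object_attr
    return relational / matched if matched else 0.5
```

Averaging this index over captions from the Japanese-trained and English-trained models would then give a single per-model score to compare.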