🤖 AI Summary
This work addresses critical limitations in the cultural-knowledge evaluation of Large Vision-Language Models (LVLMs): Western-centric bias, narrow cultural coverage, and single-task evaluation setups. To this end, the authors introduce GIMMICK, a multimodal, multi-task cultural knowledge benchmark spanning 144 countries across six global macro-regions. Methodologically, GIMMICK covers 728 unique cultural events and facets, comprises six tasks built on three new datasets, and pairs external geographic cues with a standardized evaluation framework to systematically assess 20 LVLMs and 11 LLMs (five proprietary, 26 open-weight). Key contributions include: (1) global-scale cultural coverage with a comprehensive evaluation across modern vision-language and language-only models of all sizes; and (2) empirical findings revealing pronounced Western bias, stronger model knowledge of tangible than intangible cultural aspects, substantial gains from multimodal input and external geographic cues, and a strong positive correlation between model size and cultural understanding.
📝 Abstract
Large Vision-Language Models (LVLMs) have recently gained attention due to their distinctive performance and broad applicability. While prior work has shown that their efficacy falls short in non-Western contexts, existing studies are limited in scope: they cover only a narrow range of cultures, focus on a small number of cultural aspects, or evaluate a limited selection of models on a single task. Towards globally inclusive LVLM research, we introduce GIMMICK, an extensive multimodal benchmark designed to assess a broad spectrum of cultural knowledge across 144 countries representing six global macro-regions. GIMMICK comprises six tasks built upon three new datasets that span 728 unique cultural events or facets, on which we evaluate 20 LVLMs and 11 LLMs, including five proprietary and 26 open-weight models of all sizes. We systematically examine (1) regional cultural biases, (2) the influence of model size, (3) input modalities, and (4) external cues. Our analyses reveal strong biases toward Western cultures across models and tasks and highlight strong correlations between model size and performance, as well as the effectiveness of multimodal input and external geographic cues. We further find that models have more knowledge of tangible than intangible aspects (e.g., food vs. rituals) and that they excel at recognizing broad cultural origins but struggle with more nuanced understanding.