🤖 AI Summary
Multimodal large language models (MLLMs) exhibit significant deficiencies in understanding non-Western cultural contexts—particularly China’s intangible cultural heritage—due to Western-centric training data and evaluation benchmarks. Method: We introduce TCC-Bench, the first bilingual (Chinese–English) visual question answering benchmark explicitly designed for traditional Chinese culture. It covers diverse domains—including artifacts, folk customs, and domestic animation—and pioneers “implicit cultural concept questioning” to mitigate linguistic bias and data leakage. Annotation employs a semi-automatic pipeline integrating GPT-4o-assisted generation with expert human validation to ensure cultural fidelity and evaluation robustness. Contribution/Results: Comprehensive evaluation across 30+ state-of-the-art MLLMs reveals an average accuracy gap of 42.6% relative to human performance on cultural understanding tasks, exposing critical bottlenecks in culturally grounded multimodal reasoning. TCC-Bench establishes a standardized, reproducible infrastructure for developing and evaluating culture-adapted multimodal models.
📝 Abstract
Recent progress in Multimodal Large Language Models (MLLMs) has significantly enhanced the ability of artificial intelligence systems to understand and generate multimodal content. However, these models often exhibit limited effectiveness when applied to non-Western cultural contexts, which raises concerns about their wider applicability. To address this limitation, we propose the **T**raditional **C**hinese **C**ulture understanding **Bench**mark (**TCC-Bench**), a bilingual (*i.e.*, Chinese and English) Visual Question Answering (VQA) benchmark specifically designed for assessing the understanding of traditional Chinese culture by MLLMs. TCC-Bench comprises culturally rich and visually diverse data, incorporating images from museum artifacts, everyday life scenes, comics, and other culturally significant contexts. We adopt a semi-automated pipeline that utilizes GPT-4o in text-only mode to generate candidate questions, followed by human curation to ensure data quality and avoid potential data leakage. The benchmark also avoids language bias by preventing direct disclosure of cultural concepts within question texts. Experimental evaluations across a wide range of MLLMs demonstrate that current models still face significant challenges when reasoning about culturally grounded visual content. The results highlight the need for further research in developing culturally inclusive and context-aware multimodal systems. The code and data can be found at: https://github.com/Morty-Xu/TCC-Bench.
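The abstract notes that TCC-Bench prevents direct disclosure of cultural concepts within question texts to avoid language bias. A minimal sketch of such a disclosure check is shown below; the function name, concept list, and example questions are illustrative assumptions, not taken from the released code:

```python
def discloses_concept(question: str, concept_names: list[str]) -> bool:
    """Return True if the question text directly names a cultural concept.

    Such questions would let a model answer from the text alone, without
    looking at the image, so a curation step would filter them out.
    """
    q = question.lower()
    return any(name.lower() in q for name in concept_names)


# Hypothetical candidate questions for an image of a Peking Opera mask.
candidates = [
    "What traditional art form does this Peking Opera mask belong to?",  # discloses the concept
    "What does the color of the mask in this image symbolize?",          # requires the image
]

# Keep only questions that do not reveal the concept in their wording.
kept = [q for q in candidates if not discloses_concept(q, ["Peking Opera"])]
```

Here only the second question survives curation, forcing the model to ground its answer in the visual content rather than in a concept name leaked by the question.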