🤖 AI Summary
Current vision-language models exhibit limited capability in inferring structured cultural metadata (such as creator, origin, and period) from images, and no systematic benchmark exists for evaluating this ability. This work addresses the gap by introducing the first multi-category, cross-cultural image benchmark dataset, together with fine-grained, attribute-level evaluation metrics that assess model performance across diverse cultural contexts and metadata types. Leveraging a large language model as a judge (LLM-as-Judge), we conduct a multidimensional evaluation covering exact match, partial match, and attribute-level accuracy. Experimental results reveal that existing models rely on fragmented visual cues, producing predictions that vary substantially across cultural backgrounds and attribute categories and that show poor consistency and interpretability.
📝 Abstract
Recent advances in vision-language models (VLMs) have improved image captioning for cultural heritage. However, inferring structured cultural metadata (e.g., creator, origin, period) from visual input remains underexplored. We introduce a multi-category, cross-cultural benchmark for this task and evaluate VLMs using an LLM-as-Judge framework that measures semantic alignment with reference annotations. To assess cultural reasoning, we report exact-match, partial-match, and attribute-level accuracy across cultural regions. Results show that models capture fragmented signals and exhibit substantial performance variation across cultures and metadata types, leading to inconsistent and weakly grounded predictions. These findings highlight the limitations of current VLMs in structured cultural metadata inference beyond visual perception.
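To make the evaluation protocol concrete, below is a minimal Python sketch of attribute-level scoring under exact-match, partial-match, and judge-based criteria. The attribute names, the token-overlap heuristic for partial match, and the `toy_judge` stub are illustrative assumptions, not the paper's actual implementation; in the benchmark itself an LLM would fill the judge role to measure semantic alignment with the reference annotations.

```python
from typing import Callable, Dict, List

# Hypothetical metadata attributes; the benchmark's real schema may differ.
ATTRIBUTES = ["creator", "origin", "period"]

def exact_match(pred: str, ref: str) -> bool:
    """Case-insensitive string equality."""
    return pred.strip().lower() == ref.strip().lower()

def partial_match(pred: str, ref: str) -> bool:
    """Token-overlap heuristic: any shared token counts as a partial match."""
    return bool(set(pred.lower().split()) & set(ref.lower().split()))

def evaluate(
    predictions: List[Dict[str, str]],
    references: List[Dict[str, str]],
    judge: Callable[[str, str, str], bool],
) -> Dict[str, Dict[str, float]]:
    """Attribute-level accuracy under exact, partial, and judge-based matching."""
    scores = {a: {"exact": 0, "partial": 0, "judge": 0} for a in ATTRIBUTES}
    for pred, ref in zip(predictions, references):
        for attr in ATTRIBUTES:
            p, r = pred.get(attr, ""), ref.get(attr, "")
            scores[attr]["exact"] += exact_match(p, r)
            scores[attr]["partial"] += partial_match(p, r)
            # In the paper this call would query an LLM judge for semantic alignment.
            scores[attr]["judge"] += judge(attr, p, r)
    n = len(references)
    return {a: {m: v / n for m, v in s.items()} for a, s in scores.items()}

# Toy stand-in for an LLM judge, used only to keep the sketch self-contained.
toy_judge = lambda attr, pred, ref: partial_match(pred, ref)

preds = [{"creator": "Hokusai", "origin": "Japan", "period": "Edo period"}]
refs  = [{"creator": "Katsushika Hokusai", "origin": "Japan", "period": "Edo"}]
print(evaluate(preds, refs, toy_judge))
```

Reporting these per-attribute scores separately for each cultural region is what exposes the cross-cultural performance variation described above.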