World in a Frame: Understanding Culture Mixing as a New Challenge for Vision-Language Models

📅 2025-11-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether large vision-language models (LVLMs) can maintain consistent cultural identity recognition in culturally mixed visual scenes, e.g., images containing foods from multiple distinct cultures. Addressing a gap in prior research, we formally introduce the culture mixing challenge and propose CultureMix, a food-centric visual question answering benchmark of 23k diffusion-generated, human-validated images spanning four subtasks that vary food and background combinations. Evaluating ten state-of-the-art LVLMs, we find an average 14% accuracy drop when cultural backgrounds are added to food-only baselines, along with strong background dependency and inconsistent predictions for identical foods across different contexts. To mitigate these issues, we perform supervised fine-tuning on diverse culture mixing data, substantially improving cultural consistency and reducing background interference. Our contributions include: (i) the first systematic benchmark and problem formulation for culture mixing understanding; (ii) empirical evidence of LVLMs' fragile cultural reasoning; and (iii) an effective fine-tuning strategy advancing robust cross-cultural visual understanding.

📝 Abstract
In a globalized world, cultural elements from diverse origins frequently appear together within a single visual scene. We refer to these as culture mixing scenarios, yet how Large Vision-Language Models (LVLMs) perceive them remains underexplored. We investigate culture mixing as a critical challenge for LVLMs and examine how current models behave when cultural items from multiple regions appear together. To systematically analyze these behaviors, we construct CultureMix, a food Visual Question Answering (VQA) benchmark with 23k diffusion-generated, human-verified culture mixing images across four subtasks: (1) food-only, (2) food+food, (3) food+background, and (4) food+food+background. Evaluating 10 LVLMs, we find consistent failures to preserve individual cultural identities in mixed settings. Models show strong background reliance, with accuracy dropping 14% when cultural backgrounds are added to food-only baselines, and they produce inconsistent predictions for identical foods across different contexts. To address these limitations, we explore three robustness strategies. We find that supervised fine-tuning on a diverse culture mixing dataset substantially improves model consistency and reduces background sensitivity. We call for increased attention to culture mixing scenarios as a critical step toward developing LVLMs capable of operating reliably in culturally diverse real-world environments.
Problem

Research questions and friction points this paper is trying to address.

Investigates how LVLMs perceive mixed cultural elements in visual scenes
Examines model failures in preserving cultural identities in mixed settings
Proposes strategies to improve model consistency and reduce background reliance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constructed CultureMix benchmark with 23k diffusion-generated images
Evaluated 10 LVLMs, revealing consistent failures to preserve cultural identities in mixed scenes
Applied supervised fine-tuning on a diverse culture mixing dataset to improve consistency