🤖 AI Summary
Existing metaphor processing research relies heavily on English-dominant, Western-centric unimodal data, introducing cultural biases whose impact, particularly in multimodal contexts, remains systematically unexplored. To address this, we introduce MultiMM, the first cross-cultural bilingual multimodal metaphor benchmark, comprising 8,461 advertisement image-text pairs annotated with fine-grained cultural metadata and bilingual alignment. We further propose SEMD, a sentiment-enriched metaphor detection model that integrates CLIP/ViLT multimodal representations, sentiment embeddings, contrastive learning, and domain adaptation. Experiments show that SEMD significantly outperforms baselines on both metaphor detection and sentiment analysis. MultiMM provides the first systematic evidence of how cultural factors shape multimodal metaphor interpretation, advancing fairness-aware cross-cultural NLP. All data and code are publicly released to support reproducibility and inclusive modeling.
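The summary's core modeling idea, fusing multimodal encoder features with a sentiment embedding before classification, can be sketched minimally as follows. This is an illustrative sketch, not the authors' SEMD implementation: the dimensions, the MLP head, and the random placeholder vectors (standing in for real CLIP/ViLT and sentiment encoder outputs) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: multimodal feature, sentiment embedding, hidden layer.
D_MM, D_SENT, D_H = 512, 64, 128

def fuse_and_score(mm_feat, sent_feat, W1, b1, W2, b2):
    """Concatenate multimodal and sentiment features, then apply a small
    MLP head to produce a binary metaphor probability (late fusion)."""
    x = np.concatenate([mm_feat, sent_feat], axis=-1)  # (D_MM + D_SENT,)
    h = np.maximum(0.0, x @ W1 + b1)                   # ReLU hidden layer
    logit = h @ W2 + b2                                # scalar logit
    return 1.0 / (1.0 + np.exp(-logit))                # sigmoid -> probability

# Placeholder inputs standing in for real CLIP/ViLT and sentiment encoders.
mm_feat = rng.standard_normal(D_MM)
sent_feat = rng.standard_normal(D_SENT)

# Randomly initialized (untrained) classifier parameters, for shape checking.
W1 = rng.standard_normal((D_MM + D_SENT, D_H)) * 0.02
b1 = np.zeros(D_H)
W2 = rng.standard_normal(D_H) * 0.02
b2 = 0.0

prob = fuse_and_score(mm_feat, sent_feat, W1, b1, W2, b2)
print(f"metaphor probability: {prob:.3f}")
```

In a trained system the parameters would be learned jointly with the contrastive and domain-adaptation objectives the summary mentions; the sketch only shows the fusion step.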
📝 Abstract
Metaphors are pervasive in communication, making them crucial for natural language processing (NLP). Previous research on automatic metaphor processing predominantly relies on training data consisting of English samples, which often reflect Western European or North American biases. This cultural skew can lead to an overestimation of models' performance and of their contribution to NLP progress. However, the impact of cultural bias on metaphor processing, particularly in multimodal contexts, remains largely unexplored. To address this gap, we introduce MultiMM, a Multicultural Multimodal Metaphor dataset designed for cross-cultural studies of metaphor in Chinese and English. MultiMM consists of 8,461 text-image advertisement pairs, each accompanied by fine-grained annotations, providing a deeper understanding of multimodal metaphors beyond a single cultural domain. Additionally, we propose Sentiment-Enriched Metaphor Detection (SEMD), a baseline model that integrates sentiment embeddings to enhance metaphor comprehension across cultural backgrounds. Experimental results validate the effectiveness of SEMD on metaphor detection and sentiment analysis tasks. We hope this work raises awareness of cultural bias in NLP research and contributes to the development of fairer and more inclusive language models. Our dataset and code are available at https://github.com/DUTIR-YSQ/MultiMM.