🤖 AI Summary
This work addresses the challenge of understanding potentially idiomatic expressions (PIEs) in multilingual and multimodal systems, with a focus on their cross-lingual and cross-modal transfer capabilities. To this end, we introduce XMPIE—the first large-scale, parallel, multilingual multimodal benchmark for idiom comprehension—spanning 34 languages and featuring over ten thousand expert-annotated idioms. Each idiom is paired with a fine-grained image spectrum comprising five images that progressively illustrate its interpretation from literal to metaphorical meaning. Designed to support both cross-lingual and text-image multimodal evaluation, XMPIE enables the first systematic analysis of cultural commonalities and model generalization in idiom understanding, providing a high-quality, transferable evaluation platform for advancing research in this domain.
📝 Abstract
Potentially idiomatic expressions (PIEs) construe meanings inherently tied to the everyday experience of a given language community. As such, they pose an interesting challenge for assessing the linguistic (and, to some extent, cultural) capabilities of NLP systems. In this paper, we present XMPIE, a parallel multilingual and multimodal dataset of potentially idiomatic expressions. The dataset, covering 34 languages and over ten thousand items, enables comparative analyses of idiomatic patterns across language-specific realisations and preferences, offering insights into shared cultural aspects. Its parallel design makes it possible to evaluate model performance on a given PIE across languages and to test whether idiomatic understanding in one language transfers to another. Moreover, the dataset supports the study of PIEs across textual and visual modalities, measuring the extent to which PIE understanding in one modality (text vs. image) transfers to, or implies understanding in, the other. The data was created by language experts, with both textual and visual components crafted under multilingual guidelines; each PIE is accompanied by five images spanning a spectrum from idiomatic to literal meanings, including semantically related and random distractors. The result is a high-quality benchmark for evaluating multilingual and multimodal idiomatic language understanding.
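To make the item structure described above concrete, here is a minimal, hypothetical sketch of what one XMPIE entry might look like. All field names, image labels, and example contents are illustrative assumptions, not the dataset's actual released format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ImageCandidate:
    """One of the five images paired with a PIE (schema assumed)."""
    path: str
    # Assumed label set reflecting the idiomatic-to-literal spectrum
    # plus the two distractor types mentioned in the abstract.
    label: str  # e.g. "idiomatic", "literal", "related_distractor", "random_distractor"

@dataclass
class PIEItem:
    """Hypothetical sketch of a single XMPIE item."""
    expression: str          # the potentially idiomatic expression
    language: str            # language code, e.g. "en" (format assumed)
    idiomatic_meaning: str   # expert gloss of the figurative sense
    literal_meaning: str     # the compositional, literal reading
    images: List[ImageCandidate]

    def __post_init__(self) -> None:
        # The abstract states each PIE comes with exactly five images.
        if len(self.images) != 5:
            raise ValueError("each PIE is paired with exactly five images")

# Example item (contents invented purely for illustration):
item = PIEItem(
    expression="kick the bucket",
    language="en",
    idiomatic_meaning="to die",
    literal_meaning="to strike a bucket with one's foot",
    images=[
        ImageCandidate("img/idiomatic.png", "idiomatic"),
        ImageCandidate("img/mixed.png", "spectrum_intermediate"),
        ImageCandidate("img/literal.png", "literal"),
        ImageCandidate("img/related.png", "related_distractor"),
        ImageCandidate("img/random.png", "random_distractor"),
    ],
)
```

A schema like this makes the evaluation setup explicit: a model is shown the expression (in any of the 34 languages) and must rank or select among the five image candidates.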