🤖 AI Summary
This work addresses the challenge that current vision-language models struggle to interpret visual metonymy—indirect visual references conveyed through associative cues—and lack the capacity for cognitive reasoning about implicit concepts. It presents the first systematic study of the computational modeling of visual metonymy, proposing an integrated generation-and-evaluation framework grounded in semiotic theory. The framework leverages large language models and text-to-image generators to produce metonymic visual representations and introduces ViMET, the first benchmark dataset for visual metonymy, comprising 2,000 multiple-choice questions. Experimental results show that humans achieve an accuracy of 86.9% on this task, whereas state-of-the-art multimodal models reach only 65.9%, revealing a significant gap in models' ability to comprehend indirect visual references.
📝 Abstract
Images often communicate more than they literally depict: a set of tools can suggest an occupation, and a cultural artifact can suggest a tradition. This kind of indirect visual reference, known as visual metonymy, invites viewers to recover a target concept via associated cues rather than explicit depiction. In this work, we present the first computational investigation of visual metonymy. We introduce a novel pipeline grounded in semiotic theory that leverages large language models and text-to-image models to generate metonymic visual representations. Using this framework, we construct ViMET, the first visual metonymy dataset, comprising 2,000 multiple-choice questions that evaluate the cognitive reasoning abilities of multimodal language models. Experimental results on our dataset reveal a significant gap between human performance (86.9%) and state-of-the-art vision-language models (65.9%), highlighting limitations in machines' ability to interpret indirect visual references. Our dataset is publicly available at: https://github.com/cincynlp/ViMET.
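The accuracy figures reported above (86.9% human vs. 65.9% model) follow the standard multiple-choice evaluation protocol. A minimal sketch of how such scores are computed, assuming a simple list-of-answers format (the field names below are illustrative, not ViMET's actual schema):

```python
# Hypothetical sketch of multiple-choice accuracy scoring; the answer
# letters and item structure here are illustrative only.
def accuracy(predictions, gold_answers):
    """Fraction of multiple-choice items answered correctly."""
    correct = sum(p == g for p, g in zip(predictions, gold_answers))
    return correct / len(gold_answers)

# Toy example: five 4-option items with gold labels and model picks.
gold = ["B", "D", "A", "C", "B"]
model = ["B", "D", "C", "C", "A"]
print(f"{accuracy(model, gold):.1%}")  # → 60.0%
```

Each question presents one correct target concept among distractors, so chance performance on 4-option items would be 25%, well below both the human and model scores reported.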