Hummus: A Dataset of Humorous Multimodal Metaphor Use

📅 2025-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) show marked deficiencies in understanding humorous multimodal metaphors. Method: The authors introduce Hummus, a systematically annotated dataset of 1,000 *New Yorker* cartoon-caption pairs, built on an annotation scheme grounded in the Incongruity Theory of humor, Conceptual Metaphor Theory, and the VU Amsterdam Metaphor Corpus guidelines. Annotation was carried out by experts, and MLLMs, including LLaVA and Qwen-VL, were evaluated in zero-shot and fine-tuned settings on detecting and understanding humorous multimodal metaphor use. Results: Current MLLMs struggle to identify cross-modal metaphorical mappings, particularly when integrating visual and textual information. The work contributes a theoretical framing of multimodal metaphorical humor, standardized annotation guidelines, and a benchmark dataset for research on multimodal humor comprehension.

📝 Abstract
Metaphor and humor share a lot of common ground, and metaphor is one of the most common humorous mechanisms. This study focuses on the humorous capacity of multimodal metaphors, which has not received due attention in the community. We take inspiration from the Incongruity Theory of humor, Conceptual Metaphor Theory, and the annotation scheme behind the VU Amsterdam Metaphor Corpus, and develop a novel annotation scheme for humorous multimodal metaphor use in image-caption pairs. We create the Hummus Dataset of Humorous Multimodal Metaphor Use, providing expert annotation on 1k image-caption pairs sampled from the New Yorker Caption Contest corpus. Using the dataset, we test state-of-the-art multimodal large language models (MLLMs) on their ability to detect and understand humorous multimodal metaphor use. Our experiments show that current MLLMs still struggle with processing humorous multimodal metaphors, particularly with regard to integrating visual and textual information. We release our dataset and code at github.com/xiaoyuisrain/humorous-multimodal-metaphor-use.
Problem

Research questions and friction points this paper is trying to address.

Analyzing humorous multimodal metaphor use in image-caption pairs
Developing an annotation scheme for humorous multimodal metaphors
Testing MLLMs' ability to detect and understand humorous multimodal metaphors
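The MLLM test described above can be sketched as a zero-shot probe: pair each cartoon with its caption, ask the model whether a humorous multimodal metaphor is present, and reduce the free-form answer to a binary label. This is a minimal illustrative sketch, not the authors' actual protocol; the prompt wording and helper names are assumptions, and the model call itself (e.g. to LLaVA or Qwen-VL) is omitted.

```python
def build_detection_prompt(caption: str) -> str:
    """Compose a hypothetical zero-shot question about one image-caption pair.

    The image itself would be passed to the MLLM separately; only the
    textual side of the prompt is sketched here.
    """
    return (
        "You are shown a New Yorker cartoon with the caption:\n"
        f'"{caption}"\n'
        "Does the image-caption pair use a humorous multimodal metaphor? "
        "Answer yes or no, then briefly name the source and target domains."
    )


def parse_detection_answer(response: str) -> bool:
    """Map a free-form model response to a binary detection label.

    Takes the first word of the reply, strips trailing punctuation,
    and treats 'yes' as a positive detection.
    """
    first_word = response.strip().lower().split()[0].strip(".,:;!")
    return first_word == "yes"


prompt = build_detection_prompt("It's a metaphor.")
label = parse_detection_answer("Yes. Source: journey; target: life.")
print(label)  # True
```

Understanding (as opposed to detection) would require scoring the model's explanation of the metaphorical mapping, which cannot be reduced to a one-word parse and is not sketched here.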
Innovation

Methods, ideas, or system contributions that make the work stand out.

A novel annotation scheme for humorous multimodal metaphor use
The Hummus Dataset: 1k image-caption pairs with expert annotations
A benchmark evaluation of state-of-the-art MLLMs on detecting and understanding humorous multimodal metaphors