Understanding Figurative Meaning through Explainable Visual Entailment

📅 2024-05-02
📈 Citations: 2
Influential: 0
🤖 AI Summary
Large vision-language models (VLMs) struggle to comprehend figurative expressions—such as metaphor, sarcasm, and humor—in images and text. Method: The paper frames figurative meaning understanding as an *explainable visual entailment* task: the model must decide whether an image (premise) entails a caption (hypothesis) and justify the predicted label with a textual explanation, where the figurative phenomenon may appear in the image, the caption, or both. To support this task, the authors construct V-FLUTE, an expert-verified dataset of 6,027 {image, caption, label, explanation} instances spanning five figurative phenomena (metaphors, similes, idioms, sarcasm, and humor), built via a human-AI collaborative annotation pipeline. Contribution/Results: Automatic evaluation shows that VLMs struggle to generalize from literal to figurative meaning, particularly when the figurative content is in the image, and human evaluation identifies common types of errors in VLM reasoning. This work establishes a benchmark for figurative meaning understanding and interpretable multimodal reasoning.

📝 Abstract
Large Vision-Language Models (VLMs) have demonstrated strong capabilities in tasks requiring a fine-grained understanding of literal meaning in images and text, such as visual question-answering or visual entailment. However, there has been little exploration of these models' capabilities when presented with images and captions containing figurative meaning, such as metaphors or humor. To close this gap, we propose a new task framing the figurative meaning understanding problem as an explainable visual entailment task, where the model has to predict whether the image (premise) entails a caption (hypothesis) and justify the predicted label with a textual explanation. The figurative phenomena can be present either in the image, the caption, or both. Utilizing a human-AI collaboration approach, we build the accompanying expert-verified dataset V-FLUTE, containing 6,027 {image, caption, label, explanation} instances spanning five diverse figurative phenomena: metaphors, similes, idioms, sarcasm, and humor. Through automatic evaluation, we find that VLMs struggle to generalize from literal to figurative meaning, particularly when it is present in images. Further, we identify common types of errors in VLM reasoning via human evaluation.
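The task structure described in the abstract can be sketched concretely: each V-FLUTE instance pairs an image premise with a caption hypothesis, a gold entailment label, and a gold textual explanation. The sketch below is a minimal, hypothetical rendering of that schema and of a label-accuracy check; the field names, label strings, and `label_accuracy` helper are illustrative assumptions, not the authors' actual data format or evaluation code.

```python
from dataclasses import dataclass

@dataclass
class VFluteInstance:
    # Follows the {image, caption, label, explanation} schema from the
    # abstract; concrete field names and label strings are assumptions.
    image_path: str    # the premise image
    caption: str       # the hypothesis, possibly figurative
    label: str         # "entailment" or "contradiction"
    explanation: str   # gold textual justification for the label

def label_accuracy(gold: list[VFluteInstance], predicted: list[str]) -> float:
    """Fraction of instances whose predicted label matches the gold label."""
    assert len(gold) == len(predicted)
    correct = sum(g.label == p for g, p in zip(gold, predicted))
    return correct / len(gold)

# Toy usage with two invented instances (not real dataset examples).
data = [
    VFluteInstance("img1.jpg", "The deadline is a ticking time bomb.",
                   "entailment", "The image conveys mounting pressure."),
    VFluteInstance("img2.jpg", "What a lovely day for a picnic.",
                   "contradiction", "The image depicts a storm."),
]
preds = ["entailment", "entailment"]
print(label_accuracy(data, preds))  # → 0.5
```

In the paper's setting a model would also generate the explanation, so evaluation goes beyond label accuracy to judging explanation quality; the snippet covers only the label-prediction half of the task.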
Problem

Research questions and friction points this paper is trying to address.

Evaluating VLMs' understanding of figurative meaning in images and text
Proposing explainable visual entailment for metaphors, humor, sarcasm
Addressing VLMs' generalization gap from literal to figurative interpretation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable visual entailment task framing
Human-AI collaboration dataset construction
Evaluation of figurative meaning generalization challenges