Reasoning Beyond Literal: Cross-style Multimodal Reasoning for Figurative Language Understanding

📅 2026-01-23
📈 Citations: 0 · Influential citations: 0
🤖 AI Summary
Existing vision-language models struggle to interpret multimodal rhetorical language, such as irony, humor, and metaphor, because of the semantic incongruity between images and text and the subjectivity of the underlying intent. This work proposes a lightweight three-step framework that models cross-modal reasoning across diverse rhetorical styles and generates interpretable reasoning traces to support generalization across styles. Notably, it achieves the first successful transfer of multimodal reasoning across distinct rhetorical styles (e.g., from irony to humor), improving model transparency and generalization through verifiable reasoning pathways. Experimental results show that the proposed model significantly outperforms larger open-source and closed-source counterparts across four rhetorical styles, confirming its efficiency and strong transferability.

📝 Abstract
Vision-language models (VLMs) have demonstrated strong reasoning abilities in literal multimodal tasks such as visual mathematics and science question answering. However, figurative language, such as sarcasm, humor, and metaphor, remains a significant challenge, as it conveys intent and emotion through subtle incongruities between expressed and intended meanings. In multimodal settings, accompanying images can amplify or invert textual meaning, demanding models that reason across modalities and account for subjectivity. We propose a three-step framework for developing efficient multimodal reasoning models that can (i) interpret multimodal figurative language, (ii) provide transparent reasoning traces, and (iii) generalize across multiple figurative styles. Experiments across four styles show that (1) incorporating reasoning traces substantially improves multimodal figurative understanding, (2) reasoning learned in one style can transfer to others, especially between related styles like sarcasm and humor, and (3) training jointly across styles yields a generalized reasoning VLM that outperforms much larger open- and closed-source models. Our findings show that lightweight VLMs with verifiable reasoning achieve robust cross-style generalization while providing inspectable reasoning traces for multimodal tasks. The code and implementation are available at https://github.com/scheshmi/CrossStyle-MMR.
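
The actual implementation lives in the linked repository; the snippet below is only a minimal sketch of what step (ii), eliciting an inspectable reasoning trace before a final verdict, can look like with an off-the-shelf VLM. The checkpoint (llava-hf/llava-1.5-7b-hf), the prompt wording, the example caption, and the <answer> tag convention are all illustrative assumptions, not the authors' setup.

```python
# Minimal sketch: elicit a reasoning trace, then a final verdict, from a
# generic open VLM. Model choice, prompt, and <answer> tag are assumptions;
# see https://github.com/scheshmi/CrossStyle-MMR for the actual pipeline.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # any instruction-tuned VLM would do
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("meme.jpg")        # hypothetical input image
caption = "What a lovely Monday."     # hypothetical accompanying text

# Staged prompt: literal description -> image/text incongruity -> verdict.
# The free-text steps are the "reasoning trace"; the tagged answer is what
# a verifier or evaluation script would extract.
prompt = (
    "USER: <image>\n"
    f'Caption: "{caption}"\n'
    "Step 1: Describe the literal content of the image.\n"
    "Step 2: Explain any incongruity between the image and the caption.\n"
    "Step 3: Decide whether the post is sarcastic and reply with "
    "<answer>yes</answer> or <answer>no</answer>.\n"
    "ASSISTANT:"
)

inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    model.device, torch.float16
)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Strip the prompt tokens so only the generated trace + verdict remain.
trace = processor.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(trace)
```

Splitting the prompt into explicit steps is what makes the trace inspectable: a reader or an automatic verifier can check whether the described incongruity actually supports the extracted <answer> tag before trusting the label.
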
Problem

Research questions and friction points this paper is trying to address.

figurative language
multimodal reasoning
vision-language models
cross-style generalization
sarcasm
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal reasoning
figurative language understanding
reasoning traces
cross-style generalization
vision-language models
Seyyed Saeid Cheshmi, University of Minnesota
Hahnemann Ortiz, University of Minnesota
James Mooney, University of Minnesota
Dongyeop Kang, University of Minnesota
Natural Language Processing