🤖 AI Summary
This work investigates the capabilities and limitations of large vision-language models (LVLMs) in multimodal sarcasm detection and explanation. Because existing methods struggle to capture visual-semantic misalignment, a hallmark of sarcastic content, the authors propose a lightweight, training-free framework that explicitly models the semantic tension between image and text. The approach integrates high-precision object detection, external conceptual-knowledge retrieval, and context-aware consistency reasoning, with no model fine-tuning required, improving both the accuracy of sarcasm intent recognition and the plausibility of the generated explanations. Systematic evaluations across several state-of-the-art LVLMs, including LLaVA and Qwen-VL, show consistent gains in both detection F1 score and explanation quality. The implementation is publicly available.
📝 Abstract
Sarcasm is a complex linguistic phenomenon that involves a disparity between literal and intended meanings, making it challenging for sentiment analysis and other emotion-sensitive tasks. While traditional sarcasm detection methods focus primarily on text, recent approaches have incorporated multimodal information. However, the application of Large Vision-Language Models (LVLMs) to Multimodal Sarcasm Analysis (MSA) remains underexplored. In this paper, we evaluate LVLMs on MSA tasks, specifically Multimodal Sarcasm Detection and Multimodal Sarcasm Explanation. Through comprehensive experiments, we identify key limitations, such as insufficient visual understanding and a lack of conceptual knowledge. To address these issues, we propose a training-free framework that integrates in-depth object extraction and external conceptual knowledge to improve the model's ability to interpret and explain sarcasm in multimodal contexts. Experimental results on multiple models demonstrate the effectiveness of the proposed framework. The code is available at https://github.com/cp-cp/LVLM-MSA.
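The training-free pipeline described in the abstract can be sketched as follows. This is a minimal illustrative mock-up, not the authors' implementation: `detect_objects`, `KNOWLEDGE_BASE`, and the word-list consistency check are hypothetical stand-ins for the paper's object extractor, external knowledge source, and consistency-reasoning step.

```python
# Hypothetical sketch: (1) extract salient objects from the image,
# (2) retrieve conceptual knowledge for each object, (3) score the
# image-text semantic tension, (4) assemble an enriched prompt for an
# off-the-shelf LVLM. All names and data here are illustrative.

# Toy stand-in for an external conceptual knowledge source.
KNOWLEDGE_BASE = {
    "rain": ["wet", "gloomy", "bad weather"],
    "trophy": ["victory", "achievement", "pride"],
}

def detect_objects(image_path):
    """Stand-in for a high-precision object detector; returns a fixed
    label list here purely for illustration."""
    return ["rain"]

def retrieve_concepts(objects):
    """Look up external conceptual knowledge for each detected object."""
    return {obj: KNOWLEDGE_BASE.get(obj, []) for obj in objects}

def semantic_tension(caption, concepts):
    """Toy consistency check: a positive caption paired with negatively
    connoted object concepts signals possible sarcasm."""
    positive_words = {"great", "love", "wonderful", "perfect"}
    caption_positive = any(w in positive_words
                           for w in caption.lower().split())
    negative_concepts = {"wet", "gloomy", "bad weather"}
    concept_negative = any(c in negative_concepts
                           for cs in concepts.values() for c in cs)
    return caption_positive and concept_negative

def build_prompt(caption, image_path):
    """Combine detection, knowledge, and the tension signal into a
    prompt that a frozen LVLM can answer without fine-tuning."""
    objects = detect_objects(image_path)
    concepts = retrieve_concepts(objects)
    tension = semantic_tension(caption, concepts)
    return (f"Caption: {caption}\n"
            f"Detected objects: {objects}\n"
            f"Related concepts: {concepts}\n"
            f"Possible image-text tension: {tension}\n"
            f"Is this post sarcastic? Explain.")

print(build_prompt("What a great day!", "rainy_day.jpg"))
```

The key design point mirrored here is that every step only *prepends* structured evidence to the prompt; the LVLM itself is never updated, which is what makes the framework training-free.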