🤖 AI Summary
Large Vision-Language Models (VLMs) show significant deficiencies in understanding contradiction-based humor, such as "YES/BUT" juxtaposition cartoons, which requires comparative reasoning; this limits their human-like inferential and cultural comprehension capabilities. To address this, we introduce YesBut (V2), a multilingual, multicultural cartoon benchmark, and design four progressively challenging tasks to systematically evaluate end-to-end narrative understanding, from surface-level perception to cross-modal contrastive reasoning. We propose the first fine-grained narrative understanding framework tailored to contradiction-based humor, uncovering a structural deficit in VLMs' contrastive reasoning (42.6% lower than human performance). Our method integrates social-knowledge injection, multimodal contrastive learning, and hallucination-aware key-element localization, yielding up to an 18.3% absolute accuracy gain on critical tasks and markedly improving robustness in identifying and reasoning about contradictory elements.
📝 Abstract
Understanding humor, particularly when it involves complex, contradictory narratives that require comparative reasoning, remains a significant challenge for large vision-language models (VLMs). This limitation hinders AI's ability to engage in human-like reasoning and cultural expression. In this paper, we investigate this challenge through an in-depth analysis of comics that juxtapose panels to create humor through contradiction. We introduce YesBut (V2), a novel benchmark of 1,262 comic images from diverse multilingual and multicultural contexts, featuring comprehensive annotations that capture various aspects of narrative understanding. Using this benchmark, we systematically evaluate a wide range of VLMs on four complementary tasks spanning surface content comprehension to deep narrative reasoning, with particular emphasis on comparative reasoning between contradictory elements. Our extensive experiments reveal that even the most advanced models significantly underperform humans, with common failures in visual perception, key element identification, comparative analysis, and hallucination. We further investigate text-based training strategies and social knowledge augmentation methods to enhance model performance. Our findings not only highlight critical weaknesses in VLMs' understanding of cultural and creative expressions but also provide pathways toward developing context-aware models capable of deeper narrative understanding through comparative reasoning.
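The abstract reports absolute gaps between model and human accuracy (e.g., the 42.6% deficit in the summary above). As a minimal sketch of how such per-task scores could be computed, assuming simple label matching against gold annotations; all task names, predictions, and the human score below are illustrative assumptions, not the authors' code or data:

```python
# Illustrative sketch: per-task accuracy and absolute human gap.
# All names and numbers are hypothetical, not from the YesBut (V2) release.

def accuracy(preds, golds):
    """Fraction of predictions matching gold annotations."""
    assert preds and len(preds) == len(golds)
    return sum(p == g for p, g in zip(preds, golds)) / len(preds)

# Hypothetical results for one VLM on one of the four tasks.
model_preds = {"contrastive_reasoning": ["A", "B", "B", "A"]}
gold_labels = {"contrastive_reasoning": ["A", "B", "A", "A"]}
human_acc = {"contrastive_reasoning": 0.95}  # assumed human score

for task, golds in gold_labels.items():
    acc = accuracy(model_preds[task], golds)
    gap = human_acc[task] - acc  # absolute gap, as in "42.6% lower"
    print(f"{task}: acc={acc:.1%}, human gap={gap:.1%}")
```

The same subtraction of per-task accuracies would give the "absolute accuracy gain" when comparing an augmented model against its baseline.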