Chimera: Diagnosing Shortcut Learning in Visual-Language Understanding

📅 2025-09-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether vision-language models (VLMs) genuinely reason over diagrams or merely rely on spurious shortcuts. Method: We introduce Chimera, a benchmark comprising 7,500 high-quality diagrams, each annotated with semantic triples and equipped with multi-level questions to systematically evaluate four core competencies: entity recognition, relation understanding, knowledge grounding, and visual reasoning. We propose a three-category shortcut diagnostic framework—assessing visual memorization, knowledge recall, and Clever-Hans effects—via controlled ablation studies and fine-grained quantitative analysis across 15 open-source VLMs. Contribution/Results: Our diagnosis reveals that state-of-the-art VLMs achieve high performance predominantly through language priors that induce Clever-Hans behavior; knowledge recall exerts a moderate influence, while visual memorization has negligible impact. These findings expose a widespread lack of authentic diagram comprehension in current VLMs and underscore the need for robust, shortcut-aware multimodal evaluation standards.

📝 Abstract
Diagrams convey symbolic information in a visual format rather than a linear stream of words, making them especially challenging for AI models to process. While recent evaluations suggest that vision-language models (VLMs) perform well on diagram-related benchmarks, their reliance on knowledge, reasoning, or modality shortcuts raises concerns about whether they genuinely understand and reason over diagrams. To address this gap, we introduce Chimera, a comprehensive test suite comprising 7,500 high-quality diagrams sourced from Wikipedia; each diagram is annotated with its symbolic content represented by semantic triples along with multi-level questions designed to assess four fundamental aspects of diagram comprehension: entity recognition, relation understanding, knowledge grounding, and visual reasoning. We use Chimera to measure the presence of three types of shortcuts in visual question answering: (1) the visual-memorization shortcut, where VLMs rely on memorized visual patterns; (2) the knowledge-recall shortcut, where models leverage memorized factual knowledge instead of interpreting the diagram; and (3) the Clever-Hans shortcut, where models exploit superficial language patterns or priors without true comprehension. We evaluate 15 open-source VLMs from 7 model families on Chimera and find that their seemingly strong performance largely stems from shortcut behaviors: visual-memorization shortcuts have slight impact, knowledge-recall shortcuts play a moderate role, and Clever-Hans shortcuts contribute significantly. These findings expose critical limitations in current VLMs and underscore the need for more robust evaluation protocols that benchmark genuine comprehension of complex visual inputs (e.g., diagrams) rather than question-answering shortcuts.
Problem

Research questions and friction points this paper is trying to address.

Diagnosing shortcut learning in visual-language understanding models
Assessing genuine diagram comprehension versus memorization shortcuts
Evaluating reasoning capabilities on symbolic visual information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed the Chimera test suite of 7,500 diagrams annotated with semantic triples and multi-level questions
Defined a diagnostic framework for three shortcut types in visual question answering
Evaluated 15 open-source VLMs from 7 model families, revealing heavy reliance on shortcut behaviors
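The paper's diagnosis rests on controlled ablations: comparing a model's accuracy on the full diagram-plus-question input against accuracy when one modality is removed or neutralized. A minimal sketch of that idea (the probe names and the generic replacement prompt are illustrative assumptions, not the paper's exact protocol):

```python
from typing import Callable, Dict, List, Tuple

def diagnose_shortcuts(
    model: Callable[[str, str], str],
    items: List[Tuple[str, str, str]],  # (diagram, question, gold answer)
) -> Dict[str, float]:
    """Estimate shortcut reliance by comparing accuracy under ablated inputs.

    Hypothetical probes (not the paper's exact protocol):
    - "full": diagram + question, the reference condition.
    - "question_only": diagram blanked; accuracy near "full" suggests a
      Clever-Hans (language-prior) shortcut.
    - "diagram_only": question replaced by a generic prompt; accuracy near
      "full" suggests visual memorization.
    """
    def accuracy(transform: Callable[[str, str], Tuple[str, str]]) -> float:
        correct = sum(model(*transform(d, q)) == a for d, q, a in items)
        return correct / len(items)

    return {
        "full": accuracy(lambda d, q: (d, q)),
        "question_only": accuracy(lambda d, q: ("", q)),
        "diagram_only": accuracy(lambda d, q: (d, "Describe the diagram.")),
    }
```

A large gap between the "full" score and both ablated scores is the pattern one would expect from genuine multimodal comprehension; a small gap on "question_only" is the Clever-Hans signature the paper reports as dominant.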