CrossCheck-Bench: Diagnosing Compositional Failures in Multimodal Conflict Resolution

📅 2025-11-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing multimodal large language models (MLLMs) are trained and evaluated primarily on aligned image-text pairs, and their ability to detect and resolve visual–textual conflicts in real-world scenarios has not been systematically evaluated. Method: We introduce the first diagnostic benchmark for cross-modal contradiction detection, covering seven atomic capabilities spanning perception, multimodal fusion, and multi-step logical reasoning. It comprises 15K expert-annotated, semantically calibrated question-answer pairs with synthetically injected contradictions. We further propose a novel analytical framework that combines symbolic reasoning with visual grounding. Contribution/Results: Our analysis uncovers structural bottlenecks in MLLMs’ rule verification and multi-step logical inference. Evaluations across 13 state-of-the-art models reveal a significant performance drop on logical contradiction detection; conventional prompting strategies yield marginal gains, whereas symbol–vision co-reasoning methods substantially improve robustness.

📝 Abstract
Multimodal Large Language Models are primarily trained and evaluated on aligned image-text pairs, which leaves their ability to detect and resolve real-world inconsistencies largely unexplored. In open-domain applications, visual and textual cues often conflict, requiring models to perform structured reasoning beyond surface-level alignment. We introduce CrossCheck-Bench, a diagnostic benchmark for evaluating contradiction detection in multimodal inputs. The benchmark adopts a hierarchical task framework covering three levels of reasoning complexity and defines seven atomic capabilities essential for resolving cross-modal inconsistencies. CrossCheck-Bench includes 15K question-answer pairs sourced from real-world artifacts with synthetically injected contradictions. The dataset is constructed through a multi-stage annotation pipeline involving more than 450 expert hours to ensure semantic validity and calibrated difficulty across perception, integration, and reasoning. We evaluate 13 state-of-the-art vision-language models and observe a consistent performance drop as tasks shift from perceptual matching to logical contradiction detection. Most models perform well on isolated entity recognition but fail when multiple clues must be synthesized for conflict reasoning. Capability-level analysis further reveals uneven skill acquisition, especially in tasks requiring multi-step inference or rule-based validation. Additional probing shows that conventional prompting strategies such as Chain-of-Thought and Set-of-Mark yield only marginal gains. By contrast, methods that interleave symbolic reasoning with grounded visual processing achieve more stable improvements. These results highlight a persistent bottleneck in multimodal reasoning and suggest new directions for building models capable of robust cross-modal verification.
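The construction recipe described in the abstract (a real-world artifact whose text is perturbed to contradict the image, then posed as a QA pair) can be sketched as follows. This is a minimal illustration only: the dataclass fields and the `inject_contradiction` helper are hypothetical, since the paper's actual data schema is not shown here.

```python
# Hypothetical sketch of a CrossCheck-Bench-style item: visual facts paired
# with a caption in which one attribute is swapped, creating a cross-modal
# contradiction. All field names are illustrative, not the paper's schema.
from dataclasses import dataclass


@dataclass
class BenchItem:
    image_facts: dict   # facts a perception module would extract from the image
    caption: str        # accompanying text, possibly contradicting the image
    question: str
    answer: str         # e.g. "contradiction: <field>"


def inject_contradiction(facts: dict, field: str, wrong_value: str) -> BenchItem:
    """Build a QA pair whose caption contradicts the image on exactly one field."""
    caption = ", ".join(
        f"{k} is {wrong_value if k == field else v}" for k, v in facts.items()
    )
    return BenchItem(
        image_facts=facts,
        caption=caption,
        question="Does the caption match the image?",
        answer=f"contradiction: {field}",
    )


item = inject_contradiction(
    {"vehicle": "red car", "sign": "stop sign"},
    field="sign",
    wrong_value="yield sign",
)
print(item.answer)   # contradiction: sign
```

A model evaluated on such an item must do more than match entities: it has to localize which attribute disagrees across modalities, which is exactly the capability the benchmark's harder tiers probe.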
Problem

Research questions and friction points this paper is trying to address.

Diagnosing multimodal models' failures to detect real-world inconsistencies
Evaluating contradiction detection across perception, integration, and reasoning levels
Addressing performance drops in logical conflict resolution versus surface alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical benchmark for multimodal contradiction detection
Multi-stage annotation pipeline ensuring semantic validity
Interleaving symbolic reasoning with visual processing
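The last idea above, interleaving symbolic reasoning with grounded visual processing, can be caricatured as a two-stage check: reduce both the image (via a grounding module) and the text (via a parser) to attribute predicates, then compare them symbolically. A minimal sketch, assuming dictionary-shaped predicates; the paper's actual framework is not reproduced here.

```python
# Hedged sketch of symbol-vision co-reasoning: visual facts (from grounding)
# and textual claims (from parsing) are both reduced to subject -> attribute
# maps, then checked symbolically for disagreement. Names are illustrative.
def find_conflicts(visual: dict, textual: dict) -> list:
    """Return subjects whose visual and textual attributes disagree."""
    return sorted(
        subj for subj in visual.keys() & textual.keys()
        if visual[subj] != textual[subj]
    )


visual_facts = {"sign": "stop sign", "vehicle": "red car"}
text_claims = {"sign": "yield sign", "vehicle": "red car"}
print(find_conflicts(visual_facts, text_claims))   # ['sign']
```

The symbolic comparison step is trivial once predicates exist; the benchmark's finding is that models struggle precisely to produce and synthesize such predicates across modalities before any rule check can run.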