What's Missing in Vision-Language Models? Probing Their Struggles with Causal Order Reasoning

📅 2025-06-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing vision-language models (VLMs) lack rigorous evaluation of causal temporal reasoning, and mainstream benchmarks are susceptible to spurious correlations from object/activity recognition shortcuts. Method: We introduce two dedicated benchmarks—VQA-Causal and VCR-Causal—that systematically isolate causal temporal understanding for the first time. We propose a “clean causal evaluation paradigm” involving causal perturbations, fine-grained diagnostic analysis, and hard negative sample fine-tuning to identify data-level deficiencies. Contribution/Results: We reveal that state-of-the-art VLMs achieve only ~52% accuracy—barely above random chance—due primarily to insufficient causal expression in training data. Targeted fine-tuning boosts causal reasoning accuracy by 18.7% without degrading downstream task performance. This work establishes the first strict, vision-centric benchmark for causal temporal reasoning and validates a transferable pathway for enhancing causal competence in VLMs.

Technology Category

Application Category

📝 Abstract
Despite the impressive performance of vision-language models (VLMs) on downstream tasks, their ability to understand and reason about causal relationships in visual inputs remains unclear. Robust causal reasoning is fundamental to solving complex high-level reasoning tasks, yet existing benchmarks often include a mixture of reasoning questions, and VLMs can frequently exploit object recognition and activity identification as shortcuts to arrive at the correct answers, making it challenging to truly assess their causal reasoning abilities. To bridge this gap, we introduce VQA-Causal and VCR-Causal, two new benchmarks specifically designed to isolate and rigorously evaluate VLMs' causal reasoning abilities. Our findings reveal that while VLMs excel in object and activity recognition, they perform poorly on causal reasoning tasks, often only marginally surpassing random guessing. Further analysis suggests that this limitation stems from a severe lack of causal expressions in widely used training datasets, where causal relationships are rarely explicitly conveyed. We additionally explore fine-tuning strategies with hard negative cases, showing that targeted fine-tuning can improve model's causal reasoning while maintaining generalization and downstream performance. Our study highlights a key gap in current VLMs and lays the groundwork for future work on causal understanding.
Problem

Research questions and friction points this paper is trying to address.

Assessing VLMs' unclear causal reasoning in visual inputs
Lack of causal expressions in VLM training datasets
Improving VLMs' causal reasoning via targeted fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introducing VQA-Causal and VCR-Causal benchmarks
Fine-tuning with hard negative cases
Analyzing lack of causal expressions in datasets
🔎 Similar Papers
No similar papers found.