InfoCausalQA: Can Models Perform Non-explicit Causal Reasoning Based on Infographics?

📅 2025-08-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work evaluates the implicit causal reasoning capabilities of vision-language models (VLMs) on infographics. To this end, we introduce InfoCausalQA, the first dedicated benchmark for infographic-based causal reasoning, comprising two core tasks: quantitative trend inference and semantic causal-relation understanding across five categories. The benchmark emphasizes visual grounding and deep reasoning while explicitly mitigating reliance on superficial cues. It contains 494 real-world image-text pairs and 1,482 rigorously validated multiple-choice questions, generated by GPT-4o and refined through human annotation. Experimental results show that state-of-the-art VLMs substantially underperform humans on both tasks (average accuracy gap: −32.7%), exposing a fundamental deficiency in modeling the latent causal logic embedded in infographics. This work establishes a high-quality, reproducible evaluation paradigm for multimodal causal reasoning, advancing rigorous assessment of VLMs' higher-order inferential capacities.

📝 Abstract
Recent advances in Vision-Language Models (VLMs) have demonstrated impressive capabilities in perception and reasoning. However, the ability to perform causal inference -- a core aspect of human cognition -- remains underexplored, particularly in multimodal settings. In this study, we introduce InfoCausalQA, a novel benchmark designed to evaluate causal reasoning grounded in infographics that combine structured visual data with textual context. The benchmark comprises two tasks: Task 1 focuses on quantitative causal reasoning based on inferred numerical trends, while Task 2 targets semantic causal reasoning involving five types of causal relations: cause, effect, intervention, counterfactual, and temporal. We manually collected 494 infographic-text pairs from four public sources and used GPT-4o to generate 1,482 high-quality multiple-choice QA pairs. These questions were then carefully revised by humans to ensure they cannot be answered based on surface-level cues alone but instead require genuine visual grounding. Our experimental results reveal that current VLMs exhibit limited capability in computational reasoning and even more pronounced limitations in semantic causal reasoning. Their significantly lower performance compared to humans indicates a substantial gap in leveraging infographic-based information for causal inference. Through InfoCausalQA, we highlight the need for advancing the causal reasoning abilities of multimodal AI systems.
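
To make the benchmark layout concrete, the following is a minimal Python sketch of one InfoCausalQA item and a simple accuracy metric. The page does not publish the dataset's schema, so every field and function name below is an illustrative assumption, not the authors' released format.

```python
# Hypothetical record layout for one InfoCausalQA item; field names are
# illustrative assumptions, since the dataset's real schema is not given here.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class InfoCausalQAItem:
    image_path: str          # path to the infographic image
    context: str             # accompanying textual context
    task: int                # 1 = quantitative trend inference, 2 = semantic causal reasoning
    relation: Optional[str]  # Task 2 only: cause | effect | intervention | counterfactual | temporal
    question: str
    choices: list[str]       # multiple-choice options
    answer_index: int        # index of the gold choice

def accuracy(items: list[InfoCausalQAItem],
             predict: Callable[[InfoCausalQAItem], int]) -> float:
    """Fraction of items where `predict(item)` returns the gold choice index."""
    return sum(predict(item) == item.answer_index for item in items) / len(items)
```

A model under evaluation only needs to expose a function mapping an item to a chosen option index; scoring Task 1 and Task 2 items separately then reproduces the per-task comparison described in the abstract.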
Problem

Research questions and friction points this paper is trying to address.

Evaluating causal reasoning in multimodal AI using infographics
Assessing VLMs' limitations in computational and semantic causal tasks
Bridging the gap between human and AI infographic-based reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces InfoCausalQA benchmark for causal reasoning
Uses infographic-text pairs for multimodal evaluation
Generates QA pairs with GPT-4o followed by human revision (see the sketch below)
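
As a rough illustration of that last point, here is a hedged Python sketch of the GPT-4o drafting step, with human revision applied afterwards. The model name comes from the paper; the prompt wording and the `draft_mcq` helper are assumptions, not the authors' released pipeline.

```python
# Hedged sketch of the QA-generation step: GPT-4o drafts one multiple-choice
# question per causal relation from an infographic-text pair; drafts then go
# to human annotators for revision.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_mcq(image_path: str, context: str, relation: str) -> str:
    """Return a draft 4-option MCQ targeting one causal relation type."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": (f"Context: {context}\n"
                          f"Write one 4-option multiple-choice question that tests a "
                          f"'{relation}' causal relation and can only be answered by "
                          f"reading this infographic. Mark the correct option.")},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content  # draft, pending human revision
```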
Authors
Keummin Ka, Yonsei University
Junhyeong Park, Yonsei University
Jahyun Jeon, Yonsei University
Youngjae Yu, Yonsei University