🤖 AI Summary
This work evaluates the implicit causal reasoning capabilities of vision-language models (VLMs) on infographics. To this end, we introduce InfoCausalQA, the first dedicated benchmark for infographic-based causal reasoning, comprising two core tasks: quantitative trend inference and semantic causal relation understanding across five categories. The benchmark emphasizes visual grounding and deep reasoning while explicitly mitigating reliance on superficial cues. It contains 494 real-world image-text pairs and 1,482 multiple-choice questions, generated by GPT-4o and rigorously validated through human annotation. Experiments show that state-of-the-art VLMs fall substantially short of human performance on both tasks (average accuracy gap: 32.7%), exposing a fundamental deficiency in modeling the latent causal logic embedded in infographics. InfoCausalQA thus provides a high-quality, reproducible evaluation paradigm for multimodal causal reasoning, advancing rigorous assessment of VLMs' higher-order inferential capacities.
📝 Abstract
Recent advances in Vision-Language Models (VLMs) have demonstrated impressive capabilities in perception and reasoning. However, the ability to perform causal inference -- a core aspect of human cognition -- remains underexplored, particularly in multimodal settings. In this study, we introduce InfoCausalQA, a novel benchmark designed to evaluate causal reasoning grounded in infographics that combine structured visual data with textual context. The benchmark comprises two tasks: Task 1 focuses on quantitative causal reasoning based on inferred numerical trends, while Task 2 targets semantic causal reasoning involving five types of causal relations: cause, effect, intervention, counterfactual, and temporal. We manually collected 494 infographic-text pairs from four public sources and used GPT-4o to generate 1,482 high-quality multiple-choice QA pairs. These questions were then carefully revised by humans to ensure that they cannot be answered from surface-level cues alone but instead require genuine visual grounding. Our experimental results reveal that current VLMs exhibit limited capability in computational reasoning, and even more pronounced limitations in semantic causal reasoning. Their significantly lower performance compared to humans indicates a substantial gap in leveraging infographic-based information for causal inference. Through InfoCausalQA, we highlight the need for advancing the causal reasoning abilities of multimodal AI systems.