Chart-HQA: A Benchmark for Hypothetical Question Answering in Charts

📅 2025-03-06
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing multimodal large language models (MLLMs) often rely on parametric memorization rather than genuine comprehension in chart question answering, making it difficult to assess their counterfactual reasoning capabilities. Method: We introduce *hypothetical chart question answering* (HQA), a task that requires models to perform causal or counterfactual inference grounded in chart content rather than memorized patterns, and Chart-HQA, the first benchmark for this setting. To construct high-quality data efficiently, we propose HAI, a human-AI collaboration framework integrating domain-expert verification, LLM-based text editing, counterfactual question modeling, and multi-stage chart-text alignment. Contribution/Results: Evaluation across 18 state-of-the-art MLLMs reveals critically low HQA performance (average accuracy: 32.7%) and severe imbalances in reasoning capability. This work is the first to systematically expose MLLMs' fundamental limitations in deep semantic reasoning over charts, establishing a new evaluation paradigm and a scalable methodology for assessing multimodal understanding.

๐Ÿ“ Abstract
Multimodal Large Language Models (MLLMs) have garnered significant attention for their strong visual-semantic understanding. Most existing chart benchmarks evaluate MLLMs' ability to parse information from charts to answer questions. However, they overlook the inherent output biases of MLLMs, where models rely on their parametric memory to answer questions rather than genuinely understanding the chart content. To address this limitation, we introduce a novel Chart Hypothetical Question Answering (HQA) task, which imposes assumptions on the same question to compel models to engage in counterfactual reasoning based on the chart content. Furthermore, we introduce HAI, a human-AI interactive data synthesis approach that leverages the efficient text-editing capabilities of LLMs alongside human expert knowledge to generate diverse and high-quality HQA data at low cost. Using HAI, we construct Chart-HQA, a challenging benchmark synthesized from publicly available data sources. Evaluation results on 18 MLLMs of varying sizes reveal that current models face significant generalization challenges and exhibit imbalanced reasoning performance on the HQA task.
Problem

Research questions and friction points this paper is trying to address.

Addresses MLLMs' reliance on parametric memory over chart understanding.
Introduces Chart Hypothetical Question Answering (HQA) for counterfactual reasoning.
Proposes HAI for cost-effective, diverse HQA data synthesis.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Chart Hypothetical Question Answering (HQA) task
Uses HAI for human-AI interactive data synthesis
Constructs Chart-HQA benchmark from public data
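The core idea of the HQA task can be illustrated with a toy example. The paper does not publish its data format, so the chart representation, the question pair, and the `apply_assumption` transform below are all assumptions; this is only a minimal sketch of how imposing a hypothesis on a factual question forces the answer to be recomputed from the chart content rather than recalled from memory.

```python
# Illustrative sketch only: the item structure and the assumption transform
# are hypothetical, not the paper's published format.

def answer_from_chart(chart, category):
    """Read a value directly from the chart data (grounded answering)."""
    return chart[category]

def apply_assumption(chart, category, new_value):
    """Impose a hypothetical change on the chart, as an HQA item does."""
    modified = dict(chart)
    modified[category] = new_value
    return modified

# A toy bar chart: sales per region.
chart = {"North": 120, "South": 80, "West": 95}

# Factual question: "Which region has the highest sales?"
factual_answer = max(chart, key=chart.get)

# Hypothetical question: "If South's sales doubled, which region would lead?"
hypothetical_chart = apply_assumption(chart, "South", chart["South"] * 2)
hypothetical_answer = max(hypothetical_chart, key=hypothetical_chart.get)

print(factual_answer, hypothetical_answer)
```

A model answering from parametric memory could still get the factual question right, but the hypothetical variant has no memorized answer: it can only be solved by combining the stated assumption with the values actually shown in the chart.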