🤖 AI Summary
Problem: Current multimodal large language models (MLLMs) lack visual theory of mind (ToM), the capacity to infer and reason about human perceptual and cognitive responses to visualizations.
Method: We introduce CHARTOM, the first visual ToM benchmark dedicated to chart understanding. It comprises two core tasks: factual chart interpretation (FACT) and detection of chart-induced misinterpretation (MIND), the latter bringing a social-cognitive dimension to visualization evaluation for the first time. Chart stimuli were manually constructed following cognitive psychology principles; evaluation uses multimodal prompting of leading MLLMs (GPT, Claude, Gemini, Qwen, Llama, LLaVA), with model performance calibrated against a human baseline.
Contribution/Results: Experiments show that all state-of-the-art MLLMs evaluated in late 2024 significantly underperform human annotators on CHARTOM, underscoring the benchmark's difficulty. CHARTOM establishes a new paradigm for evaluating trustworthy visual reasoning and points to concrete directions for improving the interpretability and reliability of MLLMs in data visualization.
📝 Abstract
We introduce CHARTOM, a visual theory-of-mind benchmark for multimodal large language models. CHARTOM consists of specially designed data-visualization charts. Given a chart, a language model needs not only to comprehend the chart correctly (the FACT question) but also to judge whether the chart will be misleading to a human reader (the MIND question). Answering both questions well has significant societal benefits. We detail the construction of the CHARTOM benchmark, including its calibration on human performance. We benchmark leading MLLMs as of late 2024, including GPT, Claude, Gemini, Qwen, Llama, and LLaVA, on the CHARTOM dataset and find that the benchmark is challenging for all of them, suggesting room for future large language models to improve.
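To make the evaluation setup concrete, below is a minimal sketch of how one might pose a FACT question and a MIND question about a chart image to a multimodal model via the OpenAI Chat Completions API. The model name, prompt wording, and image path are illustrative assumptions, not the paper's actual evaluation protocol.

```python
# Minimal sketch of FACT/MIND-style prompting of an MLLM.
# Assumptions: model name, prompt wording, and image path are placeholders,
# not the benchmark's actual harness.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_about_chart(image_path: str, question: str) -> str:
    """Send a chart image plus a question to a multimodal chat model."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any multimodal chat model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


# FACT: factual comprehension of the chart's content.
print(ask_about_chart("chart.png",
                      "What value does the bar for 2023 show?"))

# MIND: judging whether the chart would mislead a typical human reader.
print(ask_about_chart("chart.png",
                      "Could this chart mislead a typical human reader? "
                      "Answer YES or NO, then explain briefly."))
```

In the paper's setup, model answers to both question types are scored against human annotators, which is what allows the benchmark to quantify the gap between MLLM and human performance.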