CHARTOM: A Visual Theory-of-Mind Benchmark for Multimodal Large Language Models

📅 2024-08-26
🏛️ arXiv.org
📈 Citations: 1
Influential: 1
🤖 AI Summary
Current multimodal large language models (MLLMs) lack visual theory of mind (ToM): the capacity to infer and reason about how humans perceive and interpret visualizations. Method: We introduce CHARTOM, the first visual ToM benchmark dedicated to chart understanding, comprising two core tasks: factual chart comprehension (FACT) and detection of chart-induced human misinterpretation (MIND), the latter being the first incorporation of a social-cognitive dimension into visualization evaluation. Chart stimuli were manually constructed following cognitive psychology principles; evaluation employs multimodal prompting, with human performance as a calibrated baseline, across leading MLLMs (GPT, Claude, Gemini, Qwen, Llama, LLaVA). Contribution/Results: All state-of-the-art MLLMs as of 2024 significantly underperform human annotators on CHARTOM, underscoring the benchmark's difficulty. CHARTOM offers a new paradigm for evaluating trustworthy visual reasoning and provides concrete, actionable directions for improving MLLM interpretability and reliability in data visualization.

📝 Abstract
We introduce CHARTOM, a visual theory-of-mind benchmark for multimodal large language models. CHARTOM consists of specially designed data-visualizing charts. Given a chart, a language model must not only correctly comprehend the chart (the FACT question) but also judge whether the chart will mislead a human reader (the MIND question). Both questions have significant societal benefits. We detail the construction of the CHARTOM benchmark, including its calibration against human performance. We benchmarked leading LLMs as of late 2024, including GPT, Claude, Gemini, Qwen, Llama, and LLaVA, on the CHARTOM dataset and found the benchmark challenging for all of them, suggesting room for future large language models to improve.
Problem

Research questions and friction points this paper is trying to address.

Evaluates multimodal LLMs' chart comprehension accuracy (FACT question).
Assesses if models detect misleading charts for humans (MIND question).
Tests leading 2024 LLMs, revealing significant performance gaps.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual Theory-of-Mind benchmark for MLLMs
Charts with FACT and MIND questions
Calibrated on human performance metrics
S. Bharti, University of Wisconsin–Madison
Shiyun Cheng, University of Wisconsin–Madison
Jihyun Rho, University of Wisconsin–Madison
Martina Rao, ETH Zurich
Xiaojin Zhu, Professor in Computer Science at University of Wisconsin–Madison (Machine Learning)