Benchmarking Visual Language Models on Standardized Visualization Literacy Tests

📅 2025-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluations of visual language models (VLMs) lack standardization and experimental control, hindering rigorous assessment of visualization literacy—particularly chart comprehension and misleading visualization detection. Method: We introduce the first VLM benchmark framework tailored to visualization literacy, integrating randomized experimental design and structured prompt engineering to mitigate order effects and response bias across four state-of-the-art VLMs (GPT-4, Claude, Gemini, Llama) on the VLAT and CALVI benchmarks. Contribution/Results: Claude achieves the highest VLAT accuracy (67.9%), yet all models perform poorly at identifying misleading visualizations in CALVI (≤30.0%). Line chart understanding is robust (76–96%), whereas bubble chart interpretation (18.6–61.4%) and anomaly detection (25–30%) remain critical weaknesses. Notably, Gemini exhibits a distinct uncertainty-management strategy, actively omitting 22.5% of questions. This work establishes the first empirical characterization of VLM capabilities and behavioral disparities across core dimensions of visualization literacy.

📝 Abstract
The increasing integration of Visual Language Models (VLMs) into visualization systems demands a comprehensive understanding of their visual interpretation capabilities and constraints. While existing research has examined individual models, systematic comparisons of VLMs' visualization literacy remain unexplored. We bridge this gap through a rigorous, first-of-its-kind evaluation of four leading VLMs (GPT-4, Claude, Gemini, and Llama) using standardized assessments: the Visualization Literacy Assessment Test (VLAT) and Critical Thinking Assessment for Literacy in Visualizations (CALVI). Our methodology uniquely combines randomized trials with structured prompting techniques to control for order effects and response variability, a critical consideration overlooked in many VLM evaluations. Our analysis reveals that while specific models demonstrate competence in basic chart interpretation (Claude achieving 67.9% accuracy on VLAT), all models exhibit substantial difficulties in identifying misleading visualization elements (maximum 30.0% accuracy on CALVI). We uncover distinct performance patterns: strong capabilities in interpreting conventional charts like line charts (76-96% accuracy) and detecting hierarchical structures (80-100% accuracy), but consistent difficulties with data-dense visualizations involving multiple encodings (bubble charts: 18.6-61.4%) and anomaly detection (25-30% accuracy). Significantly, we observe distinct uncertainty management behavior across models, with Gemini displaying heightened caution (22.5% question omission) compared to others (7-8%). These findings provide crucial insights for the visualization community by establishing reliable VLM evaluation benchmarks, identifying areas where current models fall short, and highlighting the need for targeted improvements in VLM architectures for visualization tasks.
Problem

Research questions and friction points this paper is trying to address.

Evaluate VLMs' visualization literacy using standardized tests
Compare performance of leading VLMs on chart interpretation
Identify VLM weaknesses in detecting misleading visual elements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates VLMs with the standardized VLAT and CALVI tests
Controls for order effects via randomized trials with structured prompting
Analyzes VLM performance across visualization types
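The randomized-trial design described above (shuffling answer options per trial so that positional bias cannot inflate a model's score) can be sketched as follows. This is a minimal illustration, not the paper's actual harness; the function name, labels, and example question are hypothetical.

```python
import random

def randomize_trial(question: str, options: list[str], rng: random.Random):
    """Build a multiple-choice prompt with shuffled answer options.

    Returns the prompt text plus a mapping from each answer label
    back to the original option index, so responses can be scored
    after the model answers. Illustrative sketch only.
    """
    order = list(range(len(options)))
    rng.shuffle(order)  # randomize option order to control for position bias
    labels = "ABCDEFGH"[:len(options)]
    prompt = question + "\n" + "\n".join(
        f"{labels[j]}. {options[order[j]]}" for j in range(len(options))
    )
    # Map each presented label back to the original option index.
    label_to_original = {labels[j]: order[j] for j in range(len(options))}
    return prompt, label_to_original

# Hypothetical VLAT-style question, shuffled with a seeded RNG
# so each trial is reproducible.
rng = random.Random(0)
prompt, mapping = randomize_trial(
    "Which month has the highest average temperature?",
    ["January", "April", "July", "October"],
    rng,
)
```

Seeding a per-trial RNG keeps each randomized presentation reproducible, which is what lets repeated trials separate genuine comprehension from answer-position artifacts.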