Probing the Visualization Literacy of Vision Language Models: the Good, the Bad, and the Ugly

๐Ÿ“… 2025-04-07
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work evaluates the visualization literacy of vision-language models (VLMs) for chart understanding, probing their internal reasoning rather than only final-answer accuracy. Method: attention-guided class activation mapping (AG-CAM) is adapted to early-fusion VLM architectures to jointly visualize how image and text inputs influence model responses. Contribution/Results: the paper presents the first application of AG-CAM to early-fusion VLMs and to chart question answering (QA), benchmarking four open-source models (ChartGemma, Janus 1B and 7B, LLaVA) against two closed-source models (GPT-4o, Gemini). Experiments show that the models spatially localize key chart elements and semantically associate visual elements with data values and query tokens; that ChartGemma, a 3B-parameter model fine-tuned for chart QA, performs on par with much larger proprietary models; and that model attributions show preliminary alignment with human reasoning.

๐Ÿ“ Abstract
Vision Language Models (VLMs) demonstrate promising chart comprehension capabilities. Yet, prior explorations of their visualization literacy have been limited to assessing response correctness and fail to probe their internal reasoning. To address this gap, we adapted attention-guided class activation maps (AG-CAM) for VLMs to visualize the influence and importance of input features (image and text) on model responses. Using this approach, we examined four open-source (ChartGemma, Janus 1B and 7B, and LLaVA) and two closed-source (GPT-4o, Gemini) models, comparing their performance and, for the open-source models, their AG-CAM results. Overall, we found that ChartGemma, a 3B-parameter VLM fine-tuned for chart question answering (QA), outperformed the other open-source models and performed on par with significantly larger closed-source VLMs. We also found that VLMs exhibit spatial reasoning by accurately localizing key chart features, and semantic reasoning by associating visual elements with corresponding data values and query tokens. Our approach is the first to demonstrate the use of AG-CAM on early fusion VLM architectures, which are widely used, and on chart QA. We also show preliminary evidence that these results can align with human reasoning. Our promising results with open-source VLMs pave the way for transparent and reproducible research in AI visualization literacy.
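To make the method concrete, below is a minimal sketch of how an attention-guided CAM can be computed for an early-fusion VLM, in which image patch tokens and text tokens share one input sequence. This is an illustrative reconstruction rather than the paper's code: the HuggingFace-style interface (`inputs_embeds`, `output_attentions`), the token layout (image patches before text), and the patch grid size are all assumptions.

```python
import torch

def ag_cam(model, inputs_embeds, target_token_id, num_image_tokens, grid=(24, 24)):
    """Attribute one output token to image patches and text tokens (sketch).

    Assumes a HuggingFace-style causal VLM whose input sequence is the
    concatenation [image patch tokens, text tokens], and that num_image_tokens
    equals grid[0] * grid[1].
    """
    # Forward pass, keeping the attention maps in the autograd graph.
    outputs = model(inputs_embeds=inputs_embeds, output_attentions=True)
    logit = outputs.logits[0, -1, target_token_id]   # score of the target token

    # Gradient of the target logit w.r.t. the final layer's attention weights.
    attn = outputs.attentions[-1]                    # shape: (1, heads, seq, seq)
    (grads,) = torch.autograd.grad(logit, attn)

    # Grad-CAM-style weighting: keep attention whose gradient supports the
    # target logit, then average over heads.
    cam = (attn * grads.clamp(min=0)).mean(dim=1)    # shape: (1, seq, seq)

    # Relevance of each input position, read from the final query position
    # (the position that predicts the target token).
    relevance = cam[0, -1]                           # shape: (seq,)

    # Early fusion: split the joint relevance vector by modality.
    image_map = relevance[:num_image_tokens].reshape(grid)  # 2D patch heatmap
    text_scores = relevance[num_image_tokens:]               # per-text-token scores
    return image_map, text_scores
```

Weighting attention by the clamped gradient of the target logit and reading the final query row follows the Grad-CAM recipe applied to transformer attention; the paper's exact AG-CAM formulation may differ in which layers and positions it aggregates.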
Problem

Research questions and friction points this paper is trying to address.

Assessing visualization literacy of Vision Language Models (VLMs)
Exploring internal reasoning of VLMs using AG-CAM
Comparing performance of open-source and closed-source VLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapted AG-CAM for VLMs to visualize input influence (see the sketch after this list)
Evaluated VLMs using AG-CAM on chart QA tasks
Demonstrated AG-CAM on early fusion VLM architectures
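
Once per-patch relevance scores exist, visualizing input influence on the chart reduces to upsampling the patch map to the image's resolution and blending it over the chart. The helper below is a hypothetical sketch of that step; the function name and the red-channel encoding are choices made here, not the paper's.

```python
import numpy as np
from PIL import Image

def overlay_heatmap(chart_path, patch_map, alpha=0.45):
    """Upsample a 2D patch-level relevance map (numpy array) and blend it
    over the chart image as a red heatmap."""
    chart = Image.open(chart_path).convert("RGB")

    # Normalize relevance to [0, 1], then scale to 8-bit grayscale and
    # resize it to the chart's resolution.
    m = patch_map - patch_map.min()
    m = m / (m.max() + 1e-8)
    heat = Image.fromarray(np.uint8(m * 255)).resize(chart.size, Image.BILINEAR)

    # Paint relevance into the red channel and alpha-blend with the chart.
    empty = Image.new("L", chart.size)
    heat_rgb = Image.merge("RGB", (heat, empty, empty))
    return Image.blend(chart, heat_rgb, alpha)
```

For example, `overlay_heatmap("chart.png", image_map.detach().cpu().numpy())` would render the `image_map` produced by the earlier AG-CAM sketch over the source chart.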