Eye of the Beholder: Towards Measuring Visualization Complexity

📅 2025-12-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the perceptual complexity of visual charts and its determinants. To address the lack of large-scale human-annotated complexity data, we conduct a crowdsourced experiment to construct the first extensive dataset of human subjective complexity ratings. We systematically evaluate three classes of prediction approaches: traditional image-analysis metrics, hand-crafted features, and large language models (LLMs). Results reveal a significant misalignment between conventional image-complexity metrics and human perception; in contrast, zero-shot GPT-4o mini—without fine-tuning—achieves high predictive accuracy (Pearson’s *r* = 0.82), empirically supporting the core hypothesis that “visual complexity is cognitively grounded in the observer.” Our work establishes an LLM-based paradigm for automated complexity assessment, and we publicly release the full dataset, annotation protocol, and implementation code. This provides a scalable, reproducible foundation for computationally grounded evaluation and intelligent optimization of visualization design.
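The summary's headline number is a Pearson correlation between model scores and human ratings. As a minimal sketch of how such an alignment figure is computed, here is a self-contained Pearson implementation in Python; the rating values below are made-up illustrative numbers, not the study's data:

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    std_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (std_x * std_y)

# Hypothetical mean human complexity ratings vs. model-predicted scores
human = [1.0, 2.0, 3.0, 4.0, 5.0]
model = [1.2, 1.9, 3.3, 3.8, 5.1]
r = pearson_r(human, model)  # close to 1 when predictions track human ratings
```

In the study's setting, `human` would be per-chart mean ratings from the crowdsourced experiment and `model` the LLM's complexity scores for the same charts.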

📝 Abstract
Constructing expressive and legible visualizations is a key activity for visualization designers. While numerous design guidelines exist, research on how specific graphical features affect perceived visual complexity remains limited. In this paper, we report on a crowdsourced study to collect human ratings of perceived complexity for diverse visualizations. Using these ratings as ground truth, we then evaluated three methods to estimate this perceived complexity: image analysis metrics, multilinear regression using manually coded visualization features, and automated feature extraction using a large language model (LLM). Image complexity metrics showed no correlation with human-perceived visualization complexity. Manual feature coding produced a reasonable predictive model but required substantial effort. In contrast, a zero-shot LLM (GPT-4o mini) demonstrated strong capabilities in both rating complexity and extracting relevant features. Our findings suggest that visualization complexity is truly in the eye of the beholder, yet can be effectively approximated using zero-shot LLM prompting, offering a scalable approach for evaluating the complexity of visualizations. The dataset and code for the study and data analysis can be found at https://osf.io/w85a4/
Problem

Research questions and friction points this paper is trying to address.

Measuring how graphical features affect perceived visualization complexity
Evaluating methods to estimate human-perceived complexity of visualizations
Assessing zero-shot LLM capabilities for approximating visualization complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zero-shot LLM prompting approximates visualization complexity effectively
Manual feature coding yields a reasonable predictive model, but at substantial annotation cost
Image analysis metrics show no correlation with human-perceived complexity
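The manual-coding baseline above fits a multilinear regression from hand-coded chart features to mean human ratings. A minimal sketch assuming NumPy; the feature names and all values are illustrative assumptions, not the paper's actual coding scheme:

```python
import numpy as np

# Hypothetical hand-coded features per chart:
# [num_data_points, num_colors, num_axes] (illustrative only)
features = np.array([
    [10, 2, 2],
    [50, 5, 2],
    [200, 8, 3],
    [30, 3, 2],
    [120, 6, 4],
], dtype=float)

# Made-up mean human complexity ratings for the same five charts
ratings = np.array([2.1, 4.0, 7.5, 3.2, 6.8])

# Prepend an intercept column and solve the least-squares problem
X = np.column_stack([np.ones(len(features)), features])
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)

# Predicted complexity for the training charts
predicted = X @ coefs
```

The fitted coefficients indicate how much each coded feature contributes to perceived complexity, which is the interpretability advantage of this baseline over a black-box rater.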