🤖 AI Summary
Existing visual literacy assessments predominantly focus on single tasks (e.g., numerical value retrieval) and neglect the multidimensional nature of Visual Data Literacy (VDL), which includes abstract reasoning, chart familiarity, aesthetic perception, and critical evaluation. Method: We developed MAVIL, the first public-oriented, multidimensional VDL assessment tool, grounded in the learning sciences and integrating educational measurement, cognitive psychology, and visualization human factors. It spans six dimensions, employs mixed item formats (self-report plus objective tasks), uses two climate-related chart stimuli, and was administered via stratified sampling to a representative Austrian sample (n = 438). Contribution/Results: MAVIL pioneers a theoretically grounded, empirically validated decomposition of VDL into partially self-assessable dimensions, moving beyond task-centric paradigms. The results reveal substantial deficits: 48% of respondents made errors on basic charts, 25% struggled with fundamental data units, and 19-20% were unfamiliar with common chart types, demonstrating widespread VDL gaps.
📝 Abstract
The ability to read, interpret, and critique data visualizations has mainly been assessed using data visualization tasks like value retrieval. Although evidence on different facets of Visual Data Literacy (VDL) is well established in visualization research and includes numeracy, graph familiarity, and aesthetic elements, these facets have not been sufficiently considered in ability assessments. Here, VDL is treated as a multidimensional ability whose facets can be partially self-assessed. We introduce an assessment in which VDL is deconstructed as a process of understanding, drawing on frameworks from the learning sciences. MAVIL, the Multidimensional Assessment of Visual Data Literacy, comprises six ability dimensions: General Impression/Abstract Thinking, Graph Elements/Familiarity, Aesthetic Perception, Visualization Criticism, Data Reading Tasks, and Numeracy/Topic Knowledge. MAVIL was designed for general audiences and implemented in a survey (n=438) representative of Austria's age groups (18-74 years) and gender split. The survey mirrors the population's VDL and captures their perception of two climate data visualizations, a line chart and a bar chart. We found that 48% of respondents make mistakes with these simple charts, while 5% believe that they cannot summarize the visualization content. About a quarter have deficits in comprehending simple data units, and 19-20% are unfamiliar with each displayed chart type.