🤖 AI Summary
Existing visualization research lacks a standardized, empirically grounded metric for measuring trust, hindering cross-study comparison and synthesis. To address this, we introduce the first validated, multidimensional trust scale for visualization. Grounded in empirical user studies and exploratory factor analysis, we operationalize visualization trust along three theoretically and empirically supported dimensions (credibility, comprehensibility, and usability), while explicitly modeling individual general trust propensity as a moderating variable. The resulting 12-item scale demonstrates strong reliability, convergent and discriminant validity, and a robust three-factor structure. We validate its predictive power through two real-stakes behavioral trust experiments, confirming highly significant associations with actual trust behavior (p < 0.001). The scale exhibits high discriminative sensitivity and broad applicability across diverse visualization tasks and domains. This work provides a rigorous, generalizable instrument for quantifying and evaluating trust in visualization systems, advancing both empirical research and practical design assessment.
📝 Abstract
Trust plays a critical role in visual data communication and decision-making, yet existing visualization research employs varied trust measures, making it challenging to compare and synthesize findings across studies. In this work, we first took a bottom-up, data-driven approach to understand what visualization readers mean when they say they "trust" a visualization. We compiled and adapted a broad set of trust-related statements from existing inventories and collected responses on visualizations with varying degrees of trustworthiness. Through exploratory factor analysis, we derived an operational definition of trust in visualizations. Our findings indicate that people perceive a trustworthy visualization as one that presents credible information and is comprehensible and usable. Additionally, we found that general trust disposition influences how individuals assess visualization trustworthiness. Building on these insights, we developed a compact inventory consisting of statements that not only effectively represent each trust factor but also exhibit high item discrimination. We further validated our inventory through two trust games with real-world stakes, demonstrating that our measures reliably predict behavioral trust. Finally, we illustrate how this standardized inventory can be applied across diverse visualization research contexts. Utilizing our inventory, future research can examine how design choices, tasks, and domains influence trust, and how to foster appropriate trusting behavior in human-data interactions.
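For readers unfamiliar with the analysis pipeline the abstract describes, the following is a minimal illustrative sketch of exploratory factor analysis on Likert-style ratings. The data are synthetic, the item-to-factor assignment is a hypothetical stand-in for the paper's actual inventory, and `sklearn`'s `FactorAnalysis` is just one of several tools that could be used (the paper does not specify its software); it shows the general shape of recovering a three-factor structure from a 12-item instrument, not the authors' exact procedure.

```python
# Illustrative only: exploratory factor analysis (EFA) on synthetic ratings.
# The factor names and loading pattern below are hypothetical stand-ins for
# the paper's inventory, not its actual items or data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items, n_factors = 300, 12, 3

# Simulate 12 items, 4 per latent factor (e.g. credibility,
# comprehensibility, usability), each loading 0.6-0.9 on its factor.
loadings = np.zeros((n_items, n_factors))
for f in range(n_factors):
    loadings[4 * f:4 * (f + 1), f] = rng.uniform(0.6, 0.9, size=4)

factor_scores = rng.normal(size=(n_respondents, n_factors))
ratings = factor_scores @ loadings.T + rng.normal(scale=0.4,
                                                  size=(n_respondents, n_items))

# Fit a 3-factor model and inspect which items load on which factor.
fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(ratings)
print(np.round(fa.components_.T, 2))  # rows = items, columns = factors
```

In a real study one would instead choose the number of factors from the data (e.g. via eigenvalues or parallel analysis) and apply a rotation before interpreting the loadings.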