To Trust or Distrust Trust Measures: Validating Questionnaires for Trust in AI

📅 2024-03-01
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses two critical gaps in AI trust measurement: the lack of questionnaires validated in the AI context and conceptual ambiguity regarding the relationship between trust and distrust. We conducted a large-scale (N = 1,485), preregistered, cross-scenario (autonomous driving vs. chatbots) validation of two prominent scales, the Trust Scale for the AI Context (TAI) and the Trust between People and Automation scale (TPA), using confirmatory factor analysis (CFA) and classical reliability assessment. Results provide empirical evidence that trust and distrust are distinct, co-occurring constructs, necessitating their concurrent measurement. The TAI demonstrated strong psychometric properties (validity and reliability), establishing it as a robust tool for AI trust research. In contrast, the TPA exhibited structural deficiencies in its dimensional configuration, prompting concrete recommendations for revision. Collectively, this work establishes a methodological benchmark for AI trust assessment and advances the field toward differentiated, ecologically valid evaluation of trustworthy AI.

📝 Abstract
Despite the importance of trust in human-AI interactions, researchers must adopt questionnaires from other disciplines that lack validation in the AI context. Motivated by the need for reliable and valid measures, we investigated the psychometric quality of two trust questionnaires, the Trust between People and Automation scale (TPA) by Jian et al. (2000) and the Trust Scale for the AI Context (TAI) by Hoffman et al. (2023). In a pre-registered online experiment (N = 1485), participants observed interactions with trustworthy and untrustworthy AI (autonomous vehicle and chatbot). Results support the psychometric quality of the TAI while revealing opportunities to improve the TPA, which we outline in our recommendations for using the two questionnaires. Furthermore, our findings provide additional empirical evidence of trust and distrust as two distinct constructs that may coexist independently. Building on our findings, we highlight the opportunities and added value of measuring both trust and distrust in human-AI research and advocate for further work on both constructs.
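The reliability side of the psychometric quality assessed here is conventionally reported as Cronbach's alpha. A minimal, stdlib-only sketch of that computation (not the authors' code; the item scores below are made up for illustration):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: one inner list per questionnaire item, each containing
    one score per respondent (all items answered by everyone).
    """
    k = len(items)
    # Each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*items)]
    item_var_sum = sum(pvariance(scores) for scores in items)
    total_var = pvariance(totals)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Perfectly correlated items give the maximum alpha of 1.0.
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # → 1.0
```

Population variance (`pvariance`) is used for both item and total scores so the two terms in the ratio are on the same footing; using sample variance throughout gives the same alpha.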
Problem

Research questions and friction points this paper is trying to address.

AI Trust Measurement
Questionnaire Validation
Human-AI Interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trust Measurement
Artificial Intelligence
Simultaneous Trust and Distrust
Nicolas Scharowski
Center for General Psychology and Methodology, University of Basel, Switzerland
Sebastian A. C. Perrig
Center for General Psychology and Methodology, University of Basel, Switzerland
Lena Fanya Aeschbach
Center for General Psychology and Methodology, University of Basel, Switzerland
Nick von Felten
Center for General Psychology and Methodology, University of Basel, Switzerland
Klaus Opwis
University of Basel, Switzerland
Cognitive Psychology · Memory · Human-Computer Interaction
Philipp Wintersberger
IT:U Linz
Computer Science
Florian Brühlmann
Center for General Psychology and Methodology, University of Basel, Switzerland