🤖 AI Summary
This study addresses two critical gaps in AI trust measurement: the lack of questionnaires validated for the AI context and conceptual ambiguity regarding the relationship between trust and distrust. We conducted a large-scale (N = 1,485), preregistered, cross-scenario (autonomous driving vs. chatbots) validation of two prominent scales, the Trust Scale for the AI Context (TAI) and the Trust between People and Automation scale (TPA), using confirmatory factor analysis (CFA) and classical reliability assessment. Results provide empirical evidence that trust and distrust are distinct constructs that may coexist independently, necessitating their concurrent measurement. The TAI demonstrated strong psychometric properties (validity and reliability), establishing it as a robust tool for AI trust research. In contrast, the TPA exhibited structural deficiencies in its dimensional configuration, prompting concrete recommendations for revision. Collectively, this work establishes a methodological benchmark for AI trust assessment and advances the field toward differentiated and ecologically valid evaluation of trustworthy AI.
📝 Abstract
Despite the importance of trust in human-AI interactions, researchers must adopt questionnaires from other disciplines that lack validation in the AI context. Motivated by the need for reliable and valid measures, we investigated the psychometric quality of two trust questionnaires: the Trust between People and Automation scale (TPA) by Jian et al. (2000) and the Trust Scale for the AI Context (TAI) by Hoffman et al. (2023). In a preregistered online experiment (N = 1,485), participants observed interactions with trustworthy and untrustworthy AI (an autonomous vehicle and a chatbot). Results support the psychometric quality of the TAI while revealing opportunities to improve the TPA, which we outline in our recommendations for using the two questionnaires. Furthermore, our findings provide additional empirical evidence of trust and distrust as two distinct constructs that may coexist independently. Building on our findings, we highlight the opportunities and added value of measuring both trust and distrust in human-AI research and advocate for further work on both constructs.