AI Summary
This study addresses a critical gap in adolescent AI trust research by examining Chinese middle and high school students (ages 12–18). It investigates how AI literacy, self-identity, social anxiety, and psychological resilience jointly influence trust in AI chatbots. Using a mixed-methods design, the study integrates an online survey (N = 1,247) with in-depth interviews (n = 32), analyzed via structural equation modeling and thematic analysis. Key contributions include: (1) the first empirical demonstration that psychological resilience significantly and positively predicts AI trust among adolescents; (2) identification of age as a significant moderator in the relationship between social anxiety and AI trust; and (3) evidence of widespread overconfidence in AI literacy and elevated baseline trust levels among adolescents, both highly susceptible to external influences such as social media exposure. These findings provide empirically grounded theoretical insights and practical implications for designing AI literacy curricula and fostering responsible human-AI trust development in youth populations.
Abstract
AI chatbots have become increasingly prevalent, and a growing body of research has focused on human trust in AI. However, most existing user studies have been conducted primarily with adults, overlooking teenagers, who are also engaging more frequently with AI technologies. Drawing on prior theories of adolescent education and psychology, this study investigates the relationship between teenagers' psychological characteristics and their trust in AI chatbots, examining four key variables: AI literacy, ego identity, social anxiety, and psychological resilience. We adopted a mixed-methods approach, combining an online survey with semi-structured interviews. Our findings reveal that psychological resilience is a significant positive predictor of trust in AI, and that age significantly moderates the relationship between social anxiety and trust. The interviews further suggest that teenagers generally report relatively high levels of trust in AI, tend to overestimate their AI literacy, and are influenced by external factors such as social media.