Understanding Human-AI Trust in Education

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
In AI-powered educational applications, students' trust in anthropomorphic chatbots lacks theoretical clarity: it fits neither interpersonal trust (which entails intentionality and moral attribution) nor conventional technology trust (centered on reliability and performance). Method: The study proposes "human–AI trust" as a distinct construct that transcends this dichotomy. Using partial least squares structural equation modeling (PLS-SEM) on empirical data from students, the authors compare human-like trust (the stronger predictor of trusting intention) with system-like trust (the stronger predictor of behavioral intention to use and perceived usefulness). Contribution/Results: Both dimensions contribute comparably to perceived enjoyment, supporting the hybrid and distinctive character of human–AI trust. The findings provide a domain-specific theoretical foundation for trust in educational AI and identify differential mechanisms through which the two trust dimensions shape engagement, intended use, and perceived value.

📝 Abstract
As AI chatbots become increasingly integrated in education, students are turning to these systems for guidance, feedback, and information. However, the anthropomorphic characteristics of these chatbots create ambiguity regarding whether students develop trust toward them as they would a human peer or instructor, based in interpersonal trust, or as they would any other piece of technology, based in technology trust. This ambiguity presents theoretical challenges, as interpersonal trust models may inappropriately ascribe human intentionality and morality to AI, while technology trust models were developed for non-social technologies, leaving their applicability to anthropomorphic systems unclear. To address this gap, we investigate how human-like and system-like trusting beliefs comparatively influence students' perceived enjoyment, trusting intention, behavioral intention to use, and perceived usefulness of an AI chatbot - factors associated with students' engagement and learning outcomes. Through partial least squares structural equation modeling, we found that human-like and system-like trust significantly influenced student perceptions, with varied effects. Human-like trust more strongly predicted trusting intention, while system-like trust better predicted behavioral intention and perceived usefulness. Both had similar effects on perceived enjoyment. Given the partial explanatory power of each type of trust, we propose that students develop a distinct form of trust with AI chatbots (human-AI trust) that differs from human-human and human-technology models of trust. Our findings highlight the need for new theoretical frameworks specific to human-AI trust and offer practical insights for fostering appropriately calibrated trust, which is critical for the effective adoption and pedagogical impact of AI in education.
Problem

Research questions and friction points this paper is trying to address.

Examining how students trust AI chatbots in education
Comparing how human-like and system-like trust affect learning-related perceptions
Proposing new human-AI trust models for educational AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Investigates human-like and system-like trust in AI chatbots
Uses partial least squares structural equation modeling (PLS-SEM); a simplified sketch follows this list
Proposes distinct human-AI trust model for education
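The paper estimates its results with PLS-SEM. The snippet below is only a rough, hypothetical illustration of the underlying comparison: it substitutes composite (mean) scores for true PLS latent variable scores and ordinary least squares for the full PLS path estimation. All indicator names and the simulated data are invented for the example and do not come from the study's instrument.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated 7-point Likert responses; every column name here is hypothetical.
rng = np.random.default_rng(0)
n = 200
cols = [
    "ht_benevolence", "ht_integrity", "ht_competence",        # human-like trust indicators
    "st_reliability", "st_functionality", "st_helpfulness",   # system-like trust indicators
    "trusting_intention", "behavioral_intention",
    "perceived_usefulness", "perceived_enjoyment",
]
df = pd.DataFrame(rng.integers(1, 8, size=(n, len(cols))).astype(float), columns=cols)

# Composite (mean) scores stand in for PLS-SEM latent variable scores.
df["human_like_trust"] = df[["ht_benevolence", "ht_integrity", "ht_competence"]].mean(axis=1)
df["system_like_trust"] = df[["st_reliability", "st_functionality", "st_helpfulness"]].mean(axis=1)

# One regression per outcome approximates the structural paths from the two
# trust dimensions to each perception examined in the study.
X = sm.add_constant(df[["human_like_trust", "system_like_trust"]])
for outcome in ["trusting_intention", "behavioral_intention",
                "perceived_usefulness", "perceived_enjoyment"]:
    fit = sm.OLS(df[outcome], X).fit()
    print(outcome, fit.params.round(3).to_dict())
```

In the paper itself these relationships are estimated jointly within a PLS-SEM measurement and structural model rather than as separate regressions; the sketch only shows the shape of the human-like vs. system-like comparison.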