🤖 AI Summary
Educational AI is proliferating rapidly, yet understanding of its psychological and social impacts on learners remains limited, particularly regarding trust, dependency, and anthropomorphic interaction. To address these gaps, this study integrates perspectives from automation psychology, human factors engineering, human-computer interaction, and philosophy of technology into a comprehensive theoretical framework. Through large-scale analysis of 104,984 YouTube comments and cross-case comparison of AI-generated philosophical debates with human-led engineering tutorials, the research uncovers three key phenomena: the “trust paradox,” the “double-edged sword” of anthropomorphism, and the “ironies of automation.” The findings advocate differentiated human-AI collaboration strategies: AI should be confined to foundational knowledge instruction, while higher-order competencies, such as design thinking and ethical reasoning, must remain under human guidance to mitigate the risks of skill atrophy and cognitive monitoring burdens.
📝 Abstract
As AI tutors enter classrooms at unprecedented speed, their deployment increasingly outpaces our grasp of the psychological and social consequences of the technology. Yet decades of research in automation psychology, human factors, and human-computer interaction offer crucial insights that remain underutilized in educational AI design. This work synthesizes four research traditions -- automation psychology, human factors engineering, HCI, and philosophy of technology -- to establish a comprehensive framework for understanding how learners psychologically relate to anthropomorphic AI tutors. We identify three persistent challenges intensified by Generative AI's conversational fluency. First, learners exhibit dual trust calibration failures -- automation bias (uncritical acceptance) and algorithm aversion (excessive rejection after errors) -- alongside an expertise paradox in which novices over-rely while experts under-rely. Second, while anthropomorphic design enhances engagement, it can distract from learning and foster harmful emotional attachment. Third, automation ironies persist: systems meant to aid cognition introduce designer errors, degrade skills through disuse, and impose monitoring burdens that humans perform poorly. We ground this theoretical synthesis through comparative analysis of 104,984 YouTube comments across AI-generated philosophical debates and human-created engineering tutorials, revealing domain-dependent trust patterns and strong anthropomorphic projection despite minimal anthropomorphic cues. For engineering education, our synthesis mandates differentiated approaches: AI tutoring for technical foundations, where automation bias is manageable through proper scaffolding, but human facilitation for design, ethics, and professional judgment, where tacit knowledge transmission proves irreplaceable.