🤖 AI Summary
This study investigates undergraduate students' interaction behaviors with ChatGPT-4 during STEM course assessments and the mechanisms underlying AI reliance. Method: Drawing on 315 authentic student–AI dialogues from classroom settings, the authors employed a mixed-methods approach (qualitative analysis coupled with a four-stage reliance taxonomy) to examine how AI competence, task relevance, and adoption behavior jointly shape students' final answer correctness. Contribution/Results: The study finds low overall reliance on AI and limited usage efficiency, yet specific interaction strategies robustly predict reliance levels. Critically, negative reliance patterns persist across interactions: an unsuccessful initial query amplifies subsequent strategic rigidity. The study offers an empirical foundation for refining AI onboarding in education and for human–AI collaborative interface design.
📝 Abstract
This study explores how college students interact with generative AI (ChatGPT-4) during educational quizzes, focusing on reliance patterns and predictors of AI adoption. Conducted in the early stages of ChatGPT implementation, when students had limited familiarity with the tool, this field study analyzed 315 student–AI conversations during a brief, quiz-based scenario across various STEM courses. A novel four-stage reliance taxonomy was introduced to capture students' reliance patterns, distinguishing AI competence, relevance, adoption, and students' final answer correctness. Three findings emerged. First, students exhibited low overall reliance on AI, and many could not use it effectively for learning. Second, negative reliance patterns often persisted across interactions, highlighting students' difficulty in shifting strategies after unsuccessful initial experiences. Third, certain behavioral metrics strongly predicted AI reliance, pointing to behavioral mechanisms that may explain AI adoption. These findings carry critical implications for ethical AI integration in education and the broader field. They underscore the need for enhanced onboarding processes to improve students' familiarity with, and effective use of, AI tools. Furthermore, AI interfaces should incorporate calibration mechanisms that support appropriate reliance. Ultimately, this research advances understanding of AI reliance dynamics, providing foundational insights for ethically sound and cognitively enriching AI practices.