Student Engagement with GenAI's Tutoring Feedback: A Mixed Methods Study

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how students perceive, comprehend, and utilize real-time programming feedback generated by AI, focusing on cognitive engagement (attention, reasoning) and behavioral responses (problem-solving, feedback re-requesting). Using a mixed-methods approach, we collected 380 authentic feedback interactions during Python programming tasks, integrating concurrent think-aloud protocols with eye-tracking. We propose a four-dimensional feedback presentation framework and identify three key moderating factors—comprehensibility, skepticism, and information sufficiency—that shape feedback utilization decisions. Furthermore, we construct an integrated “visual attention–verbalized reasoning–behavioral response” association model. Results indicate that comprehensible feedback significantly enhances code revision, whereas perceived ambiguity or insufficiency triggers proactive follow-up queries. This work provides empirically grounded, interpretable cognitive-behavioral evidence to inform the design of pedagogically effective feedback mechanisms in intelligent tutoring systems.

📝 Abstract
How students utilize immediate tutoring feedback in programming education depends on various factors, including feedback quality, but also students' engagement, i.e., their perception, interpretation, and use of feedback. However, there is limited research on how students engage with various types of tutoring feedback. For this reason, we developed a learning environment that provides students with Python programming tasks and various types of immediate, AI-generated tutoring feedback, displayed within four components. Using a mixed-methods approach (think-aloud study and eye-tracking), we conducted a study with 20 undergraduate students enrolled in an introductory programming course. Our research aims to: (1) identify what students think when they engage with the tutoring feedback components, and (2) explore the relations between the tutoring feedback components, students' visual attention, verbalized thoughts, and their immediate actions as part of the problem-solving process. The analysis of students' thoughts while engaging with 380 feedback components revealed four main themes: expressions of understanding, expressions of disagreement, the need for additional information, and explicit judgments of the feedback. Exploring the relations between feedback, students' attention, thoughts, and actions showed a clear relationship: while expressions of understanding were associated with code improvements, expressions of disagreement or a need for additional information prompted students to request another feedback component rather than act on the current information. These insights into students' engagement and decision-making processes contribute to a better understanding of tutoring feedback and how students engage with it. This work thereby has implications for tool developers and educators who facilitate feedback.
Problem

Research questions and friction points this paper is trying to address.

Investigating how students engage with AI-generated tutoring feedback in programming
Exploring relationships between feedback components, visual attention, and student actions
Understanding student decision-making processes when receiving automated feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-generated immediate tutoring feedback for programming
Mixed-methods approach combining think-aloud and eye-tracking
Analysis of student engagement with feedback components