Auto-grader Feedback Utilization and Its Impacts: An Observational Study Across Five Community Colleges

📅 2025-07-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Background: Prior research often conflates feedback provision with feedback effectiveness, overlooking how students’ actual engagement with automated feedback mediates learning outcomes. Method: Drawing on observational data from introductory Python courses across five U.S. community colleges, we integrated submission logs and fine-grained feedback access records to empirically distinguish feedback users from non-users. Contribution/Results: We find that students who frequently accessed auto-grader feedback achieved significantly higher assignment scores; moreover, submissions following feedback review showed substantially greater performance gains than those ignoring feedback. This study is the first to demonstrate, using behavioral usage data, that feedback utilization, not merely its availability, serves as a critical mediating variable in programming learning. Our findings challenge the implicit assumption that feedback delivery equates to feedback efficacy, providing robust empirical evidence to inform both the educational evaluation of auto-grading systems and the design of more effective, behaviorally grounded feedback mechanisms.

📝 Abstract
Automated grading systems, or auto-graders, have become ubiquitous in programming education, and the feedback they generate has become increasingly automated as well. However, there is insufficient evidence on the effectiveness of auto-grader feedback in improving student learning outcomes, particularly evidence that differentiates students who utilized the feedback from those who did not. In this study, we fill this critical gap. Specifically, we analyze students' interactions with auto-graders in an introductory Python programming course offered at five community colleges in the United States. Our results show that students who check the feedback more frequently tend to earn higher scores on their programming assignments overall. They also show that a submission made after a student checks the feedback tends to receive a higher score than one made after a student ignores the feedback. Our results provide evidence of auto-grader feedback's effectiveness, encourage its increased utilization, and call for future work to continue its evaluation in this age of automation.
Problem

Research questions and friction points this paper is trying to address.

Assess auto-grader feedback's impact on student learning outcomes
Compare feedback utilization versus non-utilization in programming courses
Evaluate feedback effectiveness across five community colleges
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes auto-grader feedback usage in Python courses
Links feedback checks to higher assignment scores
Compares outcomes between feedback users and non-users
Adam Zhang
School of Computer Science, Carnegie Mellon University
Heather Burte
Lab Director, Carnegie Mellon University
Cognitive Psychology · Educational Psychology · STEM Learning
Jaromir Savelka
School of Computer Science, Carnegie Mellon University
Christopher Bogart
School of Computer Science, Carnegie Mellon University
Majd Sakr
Professor of Computer Science, Carnegie Mellon University
online education · cloud computing · human-robot interaction