AI Summary
This study investigates how learners' attributions of feedback source (AI versus human) and the credibility of those attributions influence learning behaviors and outcomes in computing education. In a three-condition randomized experiment, participants received identical feedback generated by the same large language model but attributed to different sources, with feedback timing (immediate versus delayed) controlled. This design separates the independent effects of source attribution and temporal delay, a confound prior studies have not controlled. Results indicate that feedback attributed to a human significantly increased time on task, while delayed delivery independently increased code complexity. Notably, participants who did not believe the human attribution showed worse learning outcomes than those who received feedback transparently labeled as AI-generated. These findings highlight the critical moderating role of attribution credibility in shaping learning efficacy and offer empirical guidance for the design of intelligent educational systems.
Abstract
As AI systems increasingly take on instructional roles - providing feedback, guiding practice, evaluating work - a fundamental question emerges: does it matter to learners who they believe is on the other side? We investigated this using a three-condition experiment (N=148) in which participants completed a creative coding tutorial and received feedback generated by the same large language model, attributed either to an AI system (with instant or delayed delivery) or to a human teaching assistant (with matched delayed delivery). This three-condition design separates the effect of source attribution from the confound of delivery timing, which prior studies have not controlled. Source attribution and timing affected different outcomes: participants who believed the human attribution spent more time on task than those receiving equivalently timed AI-attributed feedback (d=0.61, p=.013, uncorrected), while the delivery delay independently increased output complexity without affecting time measures. An exploratory analysis revealed that 46% of participants in the human-attributed condition did not believe the attribution, and these participants showed worse outcomes than those receiving transparent AI feedback (code complexity d=0.77, p=.003; time on task d=0.70, p=.007). These findings suggest that believed human presence may carry motivational value, but that this value depends on credibility. For computing educators, transparent AI attribution may be the lower-risk default in contexts where human attribution would not be credible.
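For readers who want to check the arithmetic behind the reported between-group contrasts, the sketch below shows how a pooled-SD Cohen's d and an uncorrected two-sample (Welch) test could be computed from per-participant measures. This is a minimal illustration, not the study's analysis pipeline: the group values, group sizes, and the helper name cohens_d are hypothetical placeholders.

```python
# Minimal sketch: Cohen's d (pooled SD) and Welch's t-test for two
# independent groups. All data below are hypothetical placeholders,
# not the study's actual measurements.
import numpy as np
from scipy import stats

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d with a pooled standard deviation for two independent groups."""
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
# Hypothetical time-on-task (minutes), ~49 participants per condition (N=148 / 3).
human_attributed = rng.normal(loc=34.0, scale=9.0, size=49)
ai_attributed_delayed = rng.normal(loc=29.0, scale=8.0, size=49)

d = cohens_d(human_attributed, ai_attributed_delayed)
t, p = stats.ttest_ind(human_attributed, ai_attributed_delayed, equal_var=False)  # Welch's t-test
print(f"d = {d:.2f}, t = {t:.2f}, p = {p:.3f}")
```

With roughly 49 participants per condition, a d of about 0.6 corresponds to an uncorrected p near .01, consistent in magnitude with the time-on-task contrast reported above.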