AI Summary
This study investigates how feedback source (large language model (LLM), expert, or peer) affects pre-service teachers' perception and adoption of feedback, moderated by source attribution, identification accuracy, and feedback quality. A randomized experimental design was employed, with perception measured via a five-dimensional Likert-scale instrument and adoption quantified through textual revision behaviors. Regression analyses tested underlying mechanisms. Results indicate that LLM-generated feedback received the highest ratings for fairness and usefulness, with a 52% adoption rate. Feedback quality emerged as the sole significant predictor of adoption. Crucially, when participants misattributed LLM feedback to an expert, their evaluations significantly improved, revealing a systematic "expert heuristic" bias in source attribution. This constitutes the first empirical identification of this cognitive mechanism's critical role in educational AI feedback contexts.
Abstract
Feedback plays a central role in learning, yet pre-service teachers' engagement with feedback depends not only on its quality but also on their perception of the feedback content and source. Large Language Models (LLMs) are increasingly used to provide educational feedback; however, negative perceptions may limit their practical use, and little is known about how pre-service teachers' perceptions and behavioral responses differ by feedback source. This study investigates how the perceived source of feedback (LLM, expert, or peer) influences feedback perception and uptake, and whether recognition accuracy and feedback quality moderate these effects. In a randomized experiment with 273 pre-service teachers, participants received written feedback on a mathematics learning goal, identified its source, rated feedback perceptions across five dimensions (fairness, usefulness, acceptance, willingness to improve, and positive and negative affect), and revised the learning goal according to the feedback (i.e., feedback uptake). Results revealed that LLM-generated feedback received the highest ratings in fairness and usefulness, leading to the highest uptake (52%). Recognition accuracy significantly moderated the effect of feedback source on perception, with particularly positive evaluations when LLM feedback was falsely ascribed to experts. Higher-quality feedback was consistently attributed to experts, indicating an expertise heuristic in source judgments. Regression analysis showed that only feedback quality significantly predicted feedback uptake. These findings highlight the need to address source-related biases and to promote feedback and AI literacy in teacher education.