To Layer or Not to Layer? Evaluating the Effects and Mechanisms of LLM-Generated Feedback on Learning Performance

📅 2026-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether layered (scaffolded) feedback—providing encouragement and hints before revealing the correct answer—genuinely enhances learning, and examines its underlying mechanisms across learners’ behavioral, cognitive, and affective dimensions. Leveraging large language models to generate personalized feedback, the research employs a randomized controlled trial combined with mediation analysis to systematically compare layered and non-layered feedback across multiple outcome measures, including learning gains, engagement, and affective perceptions. Findings reveal that while layered feedback increases behavioral engagement and is perceived as more encouraging and autonomy-supportive, it also demands greater mental effort; mediation analysis traces the negative effect to a behavioral pathway (tasks requiring repeated submissions) rather than to cognitive load, and layered feedback ultimately yields significantly lower learning performance. The work thus uncovers an intrinsic trade-off between subjective learning experience and objective learning outcomes.
📝 Abstract
Feedback is vital for learning, yet its effectiveness depends not only on its content but also on how it engages students in the learning process. Large Language Models (LLMs) offer novel opportunities to efficiently generate rich, formative feedback, ranging from direct explanations to incrementally layered scaffolding designed to foster learner autonomy. Despite these affordances, it remains unclear whether layered feedback (which sequences encouragement and prompts prior to revealing the correct answer) actually improves engagement and learning outcomes. To address this, we randomly assigned 199 participants to receive either layered or non-layered LLM-generated feedback. We assessed its impact on learning performance, behavioral and cognitive engagement, and affective perceptions, to determine how these factors mediate learning performance. Results indicate that layered feedback elicited slightly higher behavioral engagement and, as anticipated, was perceived as more encouraging and supportive of independence. However, it concurrently induced greater mental effort. Mediation analyses revealed a positive affective pathway driven by perceived encouragement, which was counteracted by a negative behavioral pathway linked to the average number of tasks requiring $\geq 3$ submissions; the cognitive pathway (mental effort) was non-significant. Taken together, layered feedback resulted in significantly poorer learning outcomes compared to non-layered feedback. These findings illuminate a critical trade-off: while layered scaffolding enhances engagement and positive perceptions, it can detrimentally impact actual learning performance. This study contributes nuanced insights for the design of automated, LLM-driven feedback systems by integrating outcome, perception, and mechanism-level analyses.
Problem

Research questions and friction points this paper is trying to address.

layered feedback
LLM-generated feedback
learning performance
learner engagement
scaffolding
Innovation

Methods, ideas, or system contributions that make the work stand out.

layered feedback
Large Language Models
learning performance
mediation analysis
cognitive engagement