Practice Less, Explain More: LLM-Supported Self-Explanation Improves Explanation Quality on Transfer Problems in Calculus

📅 2026-03-31
🤖 AI Summary
This study investigates how to enhance the quality of students’ explanations on transfer problems in calculus—particularly those characterized by insufficient information—within constrained practice time. Three learning conditions were implemented: no self-explanation, menu-based self-explanation, and open-ended self-explanation augmented with real-time feedback generated by a large language model (LLM). The integration of LLM-driven feedback into the open-ended self-explanation process represents a novel methodological contribution. A controlled experiment with quantitative analysis showed that the LLM-supported condition significantly improved explanation quality on insufficient-information transfer problems (+11.9%, p = .030) and produced marginally significant gains in explanation quality across all open-ended post-test problems (+7.3%, p = .057), while learners in that condition completed fewer practice items.
📝 Abstract
We conducted a between-subjects experiment (N = 92) comparing three conditions in a calculus learning environment: no self-explanation (control), menu-based self-explanation, and open-ended self-explanation with LLM-generated feedback. All conditions showed positive learning gains within a fixed 60-minute practice session, with no significant between-condition differences in post-test performance. On transfer questions, the open-ended condition produced significantly higher-quality explanations than control on "Not Enough Information" (NEI) problems (β = +11.9 percentage points, p = .030), though the corresponding NEI multiple-choice accuracy advantage was not significant (p = .183). Moreover, across all post-test open-ended explanations, the open-ended condition showed a marginally significant advantage (β = +7.3%, p = .057). These findings suggest that LLM-supported open-ended self-explanation can improve explanation quality on NEI transfer problems, with weaker evidence across broader transfer explanation measures. Notably, these effects emerged even though learners in the open-ended condition completed substantially fewer practice problems within the same practice time.
Problem

Research questions and friction points this paper addresses:

self-explanation
transfer problems
calculus
explanation quality
Not Enough Information
Innovation

Methods, ideas, or system contributions that make the work stand out:

LLM-supported self-explanation
open-ended explanation
transfer problems
calculus learning
explanation quality
👥 Authors
Eason Chen, Human-Computer Interaction Institute, Carnegie Mellon University (Learning Sciences, Education Technologies, Learning Analytics, Blockchain)
Xinyi Tang, Carnegie Mellon University, Pittsburgh, PA, USA
Yvonne Zhao, Carnegie Mellon University, Pittsburgh, PA, USA
Meiyi Chen, Carnegie Mellon University, Pittsburgh, PA, USA
Meryam Elmir, Carnegie Mellon University, Pittsburgh, PA, USA
Elizabeth McLaughlin, Carnegie Mellon University, Pittsburgh, PA, USA
Mingyu Yuan, Carnegie Mellon University, Pittsburgh, PA, USA
Yumo Wang, Carnegie Mellon University, Pittsburgh, PA, USA
Shyam Agarwal, Carnegie Mellon University, Pittsburgh, PA, USA
Jared Cochrane, Carnegie Mellon University, Pittsburgh, PA, USA
Jionghao Lin, University of Hong Kong | Carnegie Mellon University | Monash University (Artificial Intelligence in Education, Learning Analytics, Human-Centered AI, Feedback, Discourse)
Tongshuang Wu, Carnegie Mellon University, Pittsburgh, PA, USA
Ken Koedinger, HCII, Carnegie Mellon University (Educational Data Mining, Artificial Intelligence in Education, Learning Engineering, Intelligent Tutoring Systems)