🤖 AI Summary
This study addresses the challenge that large language models (LLMs) struggle to authentically simulate the fragmented, incoherent, and uncertain reasoning characteristic of human novices. The authors propose the first evaluation framework centered on the authenticity of verbalized thought processes, leveraging a dataset of 630 authentic think-aloud protocols from students solving multi-step chemistry problems. They compare reasoning texts generated by GPT-4.1 under minimal and expanded contextual prompts. Findings reveal that LLM-generated reasoning is excessively coherent, verbose, and low in variability—biases that intensify with richer context—and systematically overestimates learner performance. These results expose fundamental limitations in LLMs’ capacity to model genuine cognitive and metacognitive processes exhibited during novice problem-solving.
📝 Abstract
Large language models (LLMs) are increasingly embedded in AI-based tutoring systems. Can they faithfully model novice reasoning and metacognitive judgments? Existing evaluations emphasize problem-solving accuracy, overlooking the fragmented and imperfect reasoning that characterizes human learning. We evaluate LLMs as novices using 630 think-aloud utterances from multi-step chemistry tutoring problems with problem-solving logs of student hint use, attempts, and problem context. We compare LLM-generated reasoning to human learner utterances under minimal and extended contextual prompting, and assess the models' ability to predict step-level learner success. Although GPT-4.1 generates fluent and contextually appropriate continuations, its reasoning is systematically over-coherent, verbose, and less variable than human think-alouds. These effects intensify with a richer problem-solving context during prompting. Learner performance was consistently overestimated. These findings highlight epistemic limitations of simulating learning with LLMs. We attribute these limitations to LLM training data, which is dominated by expert-like solutions devoid of expressions of affect and of the working memory constraints that shape novice problem solving. Our evaluation framework can guide future design of adaptive systems that more faithfully support novice learning and self-regulation using generative artificial intelligence.
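To make the verbosity and variability comparison concrete, here is a minimal sketch of how such surface statistics over utterances might be computed. This is an illustrative assumption, not the authors' actual metric: it uses simple whitespace token counts, and the sample utterances below are invented for demonstration.

```python
# Illustrative sketch (assumed metric): compare verbosity (mean utterance
# length) and variability (std. dev. of lengths) between human think-aloud
# utterances and LLM-generated reasoning. Sample texts are hypothetical.
from statistics import mean, stdev

def verbosity_stats(utterances):
    """Return (mean token count, std. dev. of token counts) per utterance."""
    lengths = [len(u.split()) for u in utterances]
    return mean(lengths), stdev(lengths)

# Hypothetical human think-aloud fragments: short, hesitant, uneven.
human = [
    "hmm wait",
    "so I add the moles... no",
    "is it 0.5? not sure",
]

# Hypothetical LLM continuations: fluent, uniform, step-by-step.
llm = [
    "First, convert grams to moles using the molar mass.",
    "Next, apply the mole ratio from the balanced equation.",
    "Finally, compute the mass of the product.",
]

h_mean, h_sd = verbosity_stats(human)
m_mean, m_sd = verbosity_stats(llm)
# Pattern reported in the abstract: LLM output is more verbose (higher mean)
# and less variable (lower std. dev.) than human think-alouds.
```

Richer analyses (e.g. coherence or discourse-level measures) would go beyond length statistics, but even this crude sketch reproduces the qualitative pattern the study reports.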