🤖 AI Summary
Current large language models (LLMs) lack robust capabilities to perceive and respond to learners' real-time cognitive states, such as confusion or misconceptions, which hinders their effectiveness in interactive, adaptive instruction.
Method: We propose a learner-centered conversational assessment paradigm, introducing the GuideEval benchmark and a three-stage behavioral modeling framework. Leveraging authentic educational dialogue data, we design behavior-guided instruction tuning that integrates Socratic questioning with adaptive pedagogical strategy modeling.
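To make the behavior-guided instruction tuning concrete, the construction of behavior-prompted training examples could look roughly like the sketch below. This is a minimal illustration only: the field names, behavior labels, and prompt template are assumptions, not the paper's actual data format.

```python
# Illustrative sketch: packing an annotated tutoring-dialogue turn into a
# behavior-prompted (prompt, target) pair for instruction tuning.
# The three behavior labels mirror the Perception / Orchestration /
# Elicitation phases; everything else here is a hypothetical schema.

BEHAVIORS = ("perception", "orchestration", "elicitation")

def build_training_example(dialogue_history, learner_state, behavior, tutor_reply):
    """Turn one annotated dialogue turn into a supervised training example."""
    if behavior not in BEHAVIORS:
        raise ValueError(f"unknown behavior: {behavior}")
    prompt = (
        "You are a tutor. Inferred learner state: "
        f"{learner_state}. Target behavior: {behavior}.\n"
        + "\n".join(f"{role}: {text}" for role, text in dialogue_history)
        + "\nTutor:"
    )
    return {"prompt": prompt, "target": tutor_reply}

example = build_training_example(
    dialogue_history=[("Learner", "So the derivative of x^2 is just 2?")],
    learner_state="misconception: drops the variable after differentiating",
    behavior="elicitation",
    tutor_reply="What happens if you apply the power rule one step at a time?",
)
```

The key design point is that the desired tutoring behavior is made explicit in the prompt, so the model learns to condition its reply on an inferred learner state rather than on the question text alone.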
Contribution/Results: Experiments reveal that baseline LLMs exhibit markedly diminished instructional efficacy when learners are confused. In contrast, our tuned models demonstrate substantial improvements in guidance quality, achieving statistically significant gains across multiple cognitive alignment metrics, including explanation relevance, misconception correction, and scaffolding appropriateness. This work establishes a rigorously validated pathway for deploying LLMs in cognitively adaptive, personalized education.
📝 Abstract
The conversational capabilities of large language models hold significant promise for enabling scalable and interactive tutoring. While prior research has primarily examined their capacity for Socratic questioning, it often overlooks a critical dimension: adaptively guiding learners based on their cognitive states. This study shifts focus from mere question generation to the broader capability of instructional guidance. We ask: Can LLMs emulate expert tutors who dynamically adjust strategies in response to learners' understanding? To investigate this, we propose GuideEval, a benchmark grounded in authentic educational dialogues that evaluates pedagogical guidance through a three-phase behavioral framework: (1) Perception, inferring learner states; (2) Orchestration, adapting instructional strategies; and (3) Elicitation, stimulating productive reflection. Empirical findings reveal that existing LLMs frequently fail to provide effective adaptive scaffolding when learners exhibit confusion or require redirection. Furthermore, we introduce a behavior-guided fine-tuning strategy that leverages behavior-prompted instructional dialogues, significantly enhancing guidance performance. By shifting the focus from isolated content evaluation to learner-centered interaction, our work advocates a more dialogic paradigm for evaluating Socratic LLMs.
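As a rough illustration of the three-phase evaluation idea, per-turn scores for Perception, Orchestration, and Elicitation could be aggregated as sketched below. The phase names follow the abstract; the 0-1 scales and the uniform averaging are assumptions, not GuideEval's actual rubric.

```python
# Illustrative aggregation of per-phase scores for one tutoring turn.
# GuideEval's real scoring protocol is not reproduced here; only the
# three phase names come from the abstract, the rest is hypothetical.
from dataclasses import dataclass

@dataclass
class TurnScores:
    perception: float     # did the model infer the learner's state? (0-1)
    orchestration: float  # did it adapt its strategy accordingly? (0-1)
    elicitation: float    # did it prompt learner reflection? (0-1)

    def overall(self) -> float:
        """Unweighted mean of the three phase scores for this turn."""
        return (self.perception + self.orchestration + self.elicitation) / 3

def dialogue_score(turns: list[TurnScores]) -> float:
    """Mean overall score across all tutor turns in a dialogue."""
    return sum(t.overall() for t in turns) / len(turns) if turns else 0.0

turns = [TurnScores(1.0, 0.5, 0.5), TurnScores(0.5, 1.0, 0.0)]
```

Scoring per turn rather than per answer is what makes the evaluation dialogic: a factually correct reply can still score poorly if it ignores the learner's inferred state.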