Discerning minds or generic tutors? Evaluating instructional guidance capabilities in Socratic LLMs

📅 2025-08-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) lack robust capabilities to perceive and respond to learners' real-time cognitive states, such as confusion or misconception, which limits their effectiveness in interactive, adaptive instruction. Method: we propose a learner-centered conversational assessment paradigm, introducing the GuideEval benchmark and a three-stage behavioral modeling framework. Leveraging authentic educational dialogue data, we design behavior-guided instruction tuning that integrates Socratic questioning with adaptive pedagogical strategy modeling. Results: experiments reveal that baseline LLMs show markedly diminished instructional efficacy in confusion scenarios, whereas our tuned models achieve statistically significant gains across multiple cognitive alignment metrics, including explanation relevance, misconception correction, and scaffolding appropriateness. This work establishes a validated pathway for deploying LLMs in cognitively adaptive, personalized education.

📝 Abstract
The conversational capabilities of large language models hold significant promise for enabling scalable and interactive tutoring. While prior research has primarily examined their capacity for Socratic questioning, it often overlooks a critical dimension: adaptively guiding learners based on their cognitive states. This study shifts focus from mere question generation to the broader instructional guidance capability. We ask: Can LLMs emulate expert tutors who dynamically adjust strategies in response to learners' understanding? To investigate this, we propose GuideEval, a benchmark grounded in authentic educational dialogues that evaluates pedagogical guidance through a three-phase behavioral framework: (1) Perception, inferring learner states; (2) Orchestration, adapting instructional strategies; and (3) Elicitation, stimulating appropriate reflection. Empirical findings reveal that existing LLMs frequently fail to provide effective adaptive scaffolding when learners exhibit confusion or require redirection. Furthermore, we introduce a behavior-guided finetuning strategy that leverages behavior-prompted instructional dialogues, significantly enhancing guidance performance. By shifting the focus from isolated content evaluation to learner-centered interaction, our work advocates a more dialogic paradigm for evaluating Socratic LLMs.
Problem

Research questions and friction points this paper is trying to address.

Evaluate adaptive guidance in Socratic LLMs for tutoring
Assess LLMs' ability to adjust strategies based on learner states
Enhance pedagogical scaffolding using behavior-guided fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

GuideEval benchmark evaluates pedagogical guidance
Behavior-guided finetuning enhances adaptive scaffolding
Three-phase framework: Perception, Orchestration, Elicitation
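The Perception → Orchestration → Elicitation loop above can be sketched as a single tutoring turn. This is a minimal illustrative sketch, not the paper's implementation: the keyword heuristic, the strategy table, and all function names below are assumptions made for exposition.

```python
# Hypothetical sketch of the three-phase behavioral framework.
# All heuristics and names here are illustrative assumptions.

CONFUSION_MARKERS = ("don't understand", "confused", "lost", "no idea")

def perceive(learner_utterance: str) -> str:
    """Phase 1 (Perception): infer a coarse learner state from the utterance."""
    text = learner_utterance.lower()
    if any(marker in text for marker in CONFUSION_MARKERS):
        return "confused"
    return "on_track"

def orchestrate(state: str) -> str:
    """Phase 2 (Orchestration): map the inferred state to a strategy."""
    return {"confused": "scaffold", "on_track": "probe"}[state]

def elicit(strategy: str, topic: str) -> str:
    """Phase 3 (Elicitation): render the strategy as a Socratic prompt."""
    templates = {
        "scaffold": f"Let's break {topic} into smaller steps. What do we know first?",
        "probe": f"Good. What would change about {topic} if we altered one assumption?",
    }
    return templates[strategy]

def tutor_turn(learner_utterance: str, topic: str) -> str:
    """One adaptive tutoring turn: perceive, then orchestrate, then elicit."""
    state = perceive(learner_utterance)
    strategy = orchestrate(state)
    return elicit(strategy, topic)
```

In a real system, each phase would be an LLM call or a trained classifier rather than a lookup; the point of the sketch is only the control flow the framework evaluates, where the response strategy is conditioned on the inferred learner state rather than on the question alone.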
👥 Authors
Ying Liu — Beijing Normal University
Can Li — Beijing Normal University
Ting Zhang — Beijing Normal University
Mei Wang — Beijing Normal University (face recognition, fairness in AI, domain adaptation)
Qiannan Zhu — School of Artificial Intelligence, Beijing Normal University (knowledge graph, recommendation system, information retrieval)
Jian Li — Beijing Normal University
Hua Huang — Beijing Normal University