Leveraging LLMs to Assess Tutor Moves in Real-Life Dialogues: A Feasibility Study

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Manual coding of pedagogically significant tutor behaviors in remote mathematics tutoring—such as delivering effective praise and responding to student errors in line with evidence-based best practices—is costly, labor-intensive, and difficult to scale. Method: We propose an automated assessment framework leveraging large language models (GPT-4, GPT-4o, GPT-4-turbo, Gemini-1.5-pro, and LearnLM) with multi-turn prompt engineering to enable low-cost, reproducible behavioral annotation. Contribution/Results: Evaluated on 50 authentic tutoring transcripts, the framework detects praise with 94–98% accuracy and student math errors with 82–88% accuracy, and its judgments of best-practice adherence align with human experts at 83–89% (praise) and 73–77% (error response). This work provides systematic empirical evidence of LLMs' reliability in identifying nuanced, context-dependent instructional behaviors within real-world educational dialogues, pointing toward a scalable approach for large-scale analysis of teaching-process data.

📝 Abstract
Tutoring improves student achievement, but identifying and studying which tutoring actions are most associated with student learning at scale, based on audio transcriptions, is an open research problem. The present study investigates the feasibility and scalability of using generative AI to identify and evaluate specific tutor moves in real-life math tutoring. We analyze 50 randomly selected transcripts of college-student remote tutors assisting middle school students in mathematics. Using GPT-4, GPT-4o, GPT-4-turbo, Gemini-1.5-pro, and LearnLM, we assess tutors' application of two tutoring skills: delivering effective praise and responding to student math errors. All models reliably detected relevant situations, for example, tutors providing praise (94–98% accuracy) and students making math errors (82–88% accuracy), and effectively evaluated the tutors' adherence to tutoring best practices, aligning closely with human judgments (83–89% and 73–77%, respectively). We propose a cost-effective prompting strategy and discuss practical implications for using large language models to support scalable assessment in authentic settings. This work further contributes LLM prompts to support reproducibility and research in AI-supported learning.
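The agreement figures above compare LLM annotations against human judgments over the same items. A minimal sketch of how such agreement can be computed (the labels below are invented toy data, not the paper's annotations): percent agreement is the raw match rate, while Cohen's kappa corrects it for chance agreement.

```python
# Hypothetical example: comparing LLM labels against human labels for a
# binary tutor-move judgment (e.g., "praise present" per transcript turn).
from collections import Counter

def percent_agreement(llm_labels, human_labels):
    """Fraction of items where LLM and human annotations match."""
    assert len(llm_labels) == len(human_labels)
    matches = sum(a == b for a, b in zip(llm_labels, human_labels))
    return matches / len(llm_labels)

def cohens_kappa(llm_labels, human_labels):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(llm_labels)
    po = percent_agreement(llm_labels, human_labels)   # observed agreement
    llm_counts = Counter(llm_labels)
    human_counts = Counter(human_labels)
    # Expected agreement if both raters labeled at random with these marginals.
    pe = sum((llm_counts[lab] / n) * (human_counts[lab] / n)
             for lab in set(llm_counts) | set(human_counts))
    return (po - pe) / (1 - pe)

llm = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]     # toy LLM labels
human = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]   # toy human labels
print(percent_agreement(llm, human))  # → 0.9
```

Reporting kappa alongside raw agreement is standard when validating automated annotation against human coders, since raw agreement is inflated when one label dominates.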
Problem

Research questions and friction points this paper is trying to address.

Assessing tutor moves in real-life math tutoring dialogues
Evaluating feasibility of using generative AI for tutor skill analysis
Developing cost-effective LLM prompts for scalable tutoring assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using GPT-4 and Gemini to evaluate tutor skills
Cost-effective prompting strategy for scalable assessment
LLM prompts for reproducibility in AI-supported learning
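One way the cost-effective strategy could work is a two-step flow: a cheap detection prompt flags relevant turns, and a second evaluation prompt is spent only on flagged turns. The sketch below illustrates that pattern; the prompt wording, labels, and the `call_llm`/`fake_llm` stubs are hypothetical, not the authors' actual prompts.

```python
# Hypothetical sketch of a "detect, then evaluate" prompting flow for
# assessing one tutor skill (effective praise). Prompts are illustrative.

DETECT_PROMPT = (
    'You are annotating a math tutoring transcript.\n'
    'Tutor turn: "{turn}"\n'
    'Does the tutor give the student praise? Answer YES or NO.'
)

EVALUATE_PROMPT = (
    'Tutor turn: "{turn}"\n'
    'The tutor gives praise. Is it effort-based (praising the process or '
    'effort) rather than outcome- or person-based? '
    'Answer EFFECTIVE or INEFFECTIVE.'
)

def parse_binary(response: str, positive: str) -> bool:
    """Map a free-text model response onto a binary label."""
    return response.strip().upper().startswith(positive)

def assess_turn(turn: str, call_llm) -> str:
    """Run detection first; only spend a second call on flagged turns."""
    detected = call_llm(DETECT_PROMPT.format(turn=turn))
    if not parse_binary(detected, "YES"):
        return "no-praise"
    verdict = call_llm(EVALUATE_PROMPT.format(turn=turn))
    return "effective" if parse_binary(verdict, "EFFECTIVE") else "ineffective"

# Stubbed model for demonstration; in practice call_llm would query
# GPT-4, Gemini-1.5-pro, etc.
def fake_llm(prompt: str) -> str:
    return "YES" if "Does the tutor" in prompt else "EFFECTIVE"

print(assess_turn("Great job working through that step!", fake_llm))
# → effective
```

Skipping the evaluation call for turns that contain no praise keeps the per-transcript token cost roughly proportional to the number of relevant situations rather than the total number of turns.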