🤖 AI Summary
This study investigates whether large language model (LLM) tutors can match or surpass human teachers on core pedagogical dimensions. Method: We conducted a text-only, double-blind evaluation in which experienced educators compared LLM and human instructors on four key teaching dimensions: engagement, empathy, scaffolding, and conciseness. An interactive mathematics tutoring system was built on state-of-the-art LLMs, and evaluations used expert annotation under a controlled experimental design. Contribution/Results: The LLM tutor outperformed human instructors on all four metrics, with the largest advantage in empathy (an 80% preference rate). To our knowledge, this is the first study to systematically evaluate LLMs as scalable pedagogical assistants using educator-led blind assessment as primary evidence, demonstrating their feasibility and potential for augmenting instruction at scale.
📝 Abstract
The rapid development of Large Language Models (LLMs) opens up the possibility of using them as personal tutors. This has led to the development of several intelligent tutoring systems and learning assistants that use LLMs as back-ends with varying degrees of engineering. In this study, we compare human tutors with LLM tutors in terms of engagement, empathy, scaffolding, and conciseness. We ask human tutors to annotate and compare the performance of an LLM tutor with that of a human tutor in teaching grade-school math word problems on these qualities. We find that annotators with teaching experience perceive LLMs as showing higher performance than human tutors on all four metrics. The biggest advantage is in empathy, where 80% of our annotators prefer the LLM tutor more often than the human tutor. Our study paints a positive picture of LLMs as tutors and indicates that these models can be used to reduce the load on human teachers in the future.