TutorBench: A Benchmark To Assess Tutoring Capabilities Of Large Language Models

📅 2025-10-02
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the lack of systematic evaluation of large language models' (LLMs) pedagogical capabilities. We introduce TutorBench, the first benchmark specifically designed for AI tutoring, comprising 1,490 expert-crafted high school and Advanced Placement (AP) curriculum samples. It targets three core tutoring tasks: adaptive explanation generation, actionable feedback provision, and proactive learning prompt synthesis. We propose fine-grained, task-specific scoring criteria, integrating LLM-based automated evaluation ("LLM-as-judge") with human-defined rules to ensure assessment reliability. Comprehensive evaluation of 16 state-of-the-art models reveals a top score of only 55.8%, with pass rates for critical tutoring competencies all below 60%, highlighting persistent deficits in learning diagnosis and personalized instructional support. TutorBench fills a critical gap in quantitatively assessing AI tutor capabilities and provides a standardized, reproducible evaluation framework to guide the development and iterative improvement of educational foundation models.


📝 Abstract
As students increasingly adopt large language models (LLMs) as learning aids, it is crucial to build models that are adept at handling the nuances of tutoring: they need to identify the core needs of students, be adaptive, provide personalized guidance, and be accurate. To this end, we introduce TutorBench, a dataset and evaluation benchmark designed to rigorously evaluate the core tutoring skills of LLMs. The dataset comprises 1,490 samples curated by human experts, focused on high-school and AP-level curricula. The samples are drawn from three common tutoring tasks: (i) generating adaptive explanations tailored to a student's confusion, (ii) providing actionable feedback on a student's work, and (iii) promoting active learning through effective hint generation. To account for the inherent complexity of tutoring, samples are accompanied by sample-specific rubrics which are used to judge model responses during evaluation. TutorBench uses a reliable and fine-grained automatic evaluation method that combines an LLM judge with these sample-specific rubrics. We evaluate 16 frontier LLMs on TutorBench and present a detailed analysis of their performance and behavior. Our results show that none of the frontier LLMs achieves a score greater than 56%, leaving large room for improvement. We find that LLMs fall short of exhibiting the full range of tutoring skills needed to guide, diagnose, and support students effectively, with all the frontier models achieving less than a 60% pass rate on rubric criteria related to these skills. We also find that different model families exhibit varied strengths and limitations: the Claude models outperform others in supporting active learning, while they lag behind in the other two use cases. By releasing TutorBench, we provide a comprehensive and unsaturated benchmark to guide the development of the next generation of AI tutors.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' tutoring skills through adaptive explanations
Evaluating models' ability to provide actionable student feedback
Measuring effectiveness of hint generation for active learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dataset with expert-curated samples for tutoring evaluation
Automatic evaluation using LLM-judge with sample-specific rubrics
Benchmark assessing three core tutoring skills of models
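The rubric-based evaluation can be pictured as a simple aggregation: an LLM judge marks each criterion in a sample's rubric as pass or fail, and scores are averaged across criteria and samples. The sketch below illustrates this scheme; the function names, rubric keys, and the fraction-of-criteria-passed aggregation are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of rubric-based LLM-judge scoring.
# Each sample carries its own rubric; a judge's per-criterion
# pass/fail verdicts are averaged into a sample score, and
# sample scores are averaged into a benchmark score.

def score_sample(judge_verdicts):
    """Fraction of rubric criteria the response passed (0.0 to 1.0)."""
    if not judge_verdicts:
        raise ValueError("rubric must contain at least one criterion")
    return sum(judge_verdicts.values()) / len(judge_verdicts)

def benchmark_score(samples):
    """Mean sample score across the benchmark, as a percentage."""
    return 100 * sum(score_sample(s) for s in samples) / len(samples)

# Example: two samples with sample-specific rubrics (criterion names invented)
samples = [
    {"identifies_confusion": True, "adapts_explanation": False, "accurate": True},
    {"actionable_feedback": True, "cites_student_work": True},
]
print(f"{benchmark_score(samples):.1f}%")  # → 83.3%
```

Because each rubric is tailored to its sample, averaging the per-criterion pass rate keeps samples with many criteria from dominating the benchmark score.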
Rakshith S Srinivasa
Scale AI
Zora Che
University of Maryland
Chen Bo Calvin Zhang
Scale AI
Diego Mares
Scale AI
Ernesto Hernandez
Scale AI
Jayeon Park
Scale AI
Dean Lee
Facility for Rare Isotope Beams and Department of Physics and Astronomy, Michigan State University
Guillermo Mangialardi
Scale AI
Charmaine Ng
Scale AI
Ed-Yeremai Hernandez Cardona
Scale AI
Anisha Gunjal
Scale AI
Yunzhong He
University of California, Los Angeles
Bing Liu
Scale AI
Chen Xing
Scale AI