🤖 AI Summary
Problem: The field still lacks a reliable, accessible framework for evaluating the pedagogical competence of AI tutoring models in open-ended questioning scenarios.
Method: We introduce the first open-source benchmark for conversational teaching, grounded in learning sciences theory. It features a multidimensional pedagogical competency framework covering questioning, feedback, and cognitive scaffolding, together with a reward model trained on expert-annotated data that discriminates response quality with high accuracy. We further propose multi-granularity dialogue evaluation metrics.
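As a rough, hypothetical sketch (not the benchmark's actual implementation), a reward model of this kind can be realised as a scalar-output sequence classifier that scores a tutor reply in its dialogue context; the checkpoint name, prompt format, and `score_response` helper below are assumptions for illustration only.

```python
# Hypothetical sketch of a scalar pedagogical-quality reward model.
# The checkpoint name and prompt format are placeholders, not the
# benchmark's actual implementation.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINT = "my-org/pedagogy-reward-model"  # placeholder name

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=1)
model.eval()

def score_response(dialog_history: str, tutor_reply: str) -> float:
    """Return a scalar quality score for one tutor reply in context."""
    text = f"{dialog_history}\nTutor: {tutor_reply}"
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        logits = model(**inputs).logits  # shape (1, 1) for a scalar head
    return logits.squeeze().item()

# A good reward model should prefer a guiding question over giving the answer away.
history = "Student: I got 3/4 + 1/2 = 4/6. Is that right?"
print(score_response(history, "Close! What has to match before you can add the numerators?"))
print(score_response(history, "No, the answer is 5/4."))
```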
Contribution/Results: Evaluating over 30 state-of-the-art models, we empirically reveal a significant trade-off between domain problem-solving ability and pedagogical effectiveness, and demonstrate that long-horizon dialogues substantially increase pedagogical difficulty. The benchmark—including code, annotated datasets, and a live leaderboard—is publicly released to advance reproducible, interpretable, and theory-informed evaluation of AI teaching assistants.
📝 Abstract
Evaluating the pedagogical capabilities of AI-based tutoring models is critical for making guided progress in the field. Yet, we lack a reliable, easy-to-use, and simple-to-run evaluation that reflects the pedagogical abilities of models. To fill this gap, we present MathTutorBench, an open-source benchmark for holistic tutoring model evaluation. MathTutorBench contains a collection of datasets and metrics that broadly cover tutor abilities as defined by learning sciences research in dialog-based teaching. To score the pedagogical quality of open-ended teacher responses, we train a reward model and show it can discriminate expert from novice teacher responses with high accuracy. We evaluate a wide set of closed- and open-weight models on MathTutorBench and find that subject expertise, indicated by solving ability, does not immediately translate to good teaching. Rather, pedagogy and subject expertise appear to form a trade-off that is navigated by the degree of tutoring specialization of the model. Furthermore, tutoring appears to become more challenging in longer dialogs, where simpler questioning strategies begin to fail. We release the benchmark, code, and leaderboard openly to enable rapid benchmarking of future models.
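To make the expert-vs-novice discrimination claim concrete, here is a minimal, hypothetical way to measure how often such a reward model ranks an expert reply above a novice reply for the same dialogue context; it reuses the `score_response` sketch above, and the triple format is an assumption rather than the benchmark's data schema.

```python
# Hypothetical sketch: pairwise expert-vs-novice discrimination accuracy.
# Assumes (context, expert_reply, novice_reply) triples and the
# `score_response` helper sketched earlier.
from typing import List, Tuple

def pairwise_accuracy(pairs: List[Tuple[str, str, str]]) -> float:
    """Fraction of pairs where the expert reply receives the higher score."""
    correct = sum(
        score_response(ctx, expert) > score_response(ctx, novice)
        for ctx, expert, novice in pairs
    )
    return correct / len(pairs)
```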