AI Summary
Existing natural language generation metrics struggle to evaluate whether AI tutors accurately identify mathematical errors and provide appropriate guidance without directly revealing answers. To address this gap, this work proposes the first quality assessment framework tailored to this pedagogical task. The approach defines a hierarchy of pedagogical dimensions based on human preference data from MRBench, synthesizes minimally different response pairs, and employs a weighted ranking strategy to generate high-quality training data. A Bradley–Terry preference model is then trained on a 0.5B-parameter backbone. Experiments show that the model trained solely on synthetic data achieves 69% accuracy on a human preference test set; incorporating weighted data further improves accuracy to 74%, outperforming larger general-purpose reward models.
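The weighted ranking strategy can be illustrated with a minimal sketch: each response is scored along several pedagogical aspects, the aspect scores are combined by a weighted sum, and responses are ranked by the result. The aspect names and weights below are hypothetical placeholders, not the paper's actual values.

```python
# Hypothetical aspect weights; the paper's real dimensions and weights differ.
WEIGHTS = {
    "mistake_identification": 3.0,
    "scaffolding": 2.0,
    "actionability": 1.5,
    "clarity": 1.0,
}

def weighted_score(aspect_scores):
    """Combine per-aspect scores (dict of aspect -> float) into one scalar."""
    return sum(WEIGHTS[a] * aspect_scores.get(a, 0.0) for a in WEIGHTS)

def rank_responses(responses):
    """Rank (response_text, aspect_scores) pairs, best first."""
    return sorted(responses, key=lambda r: weighted_score(r[1]), reverse=True)
```

Rankings produced this way can then be converted into preference pairs (higher-ranked vs. lower-ranked) for training.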
Abstract
Evaluating the pedagogical quality of AI tutors remains challenging: standard NLG metrics cannot determine whether responses identify mistakes, scaffold reasoning, or avoid revealing the answer. For the task of mistake remediation, we derive a hierarchy of pedagogical aspects from human pairwise preferences on MRBench and synthesize minimally contrastive response pairs that differ along key aspects (e.g., mistake identification and location, targetedness, scaffolding, actionability, clarity, and coherence). We develop and release Bradley–Terry preference models trained on weighted-sum rankings that we automatically create from MRBench, synthetic pairs, and combinations of both. Using only synthetic data, our best model reaches 0.69 pairwise accuracy on a human preference test set, and combining weighted-sum data with targeted synthetic groups improves accuracy to 0.74, outperforming larger general-purpose reward models while using only a 0.5B-parameter backbone.