🤖 AI Summary
In scenarios with scarce labeled data, conventional classifier evaluation suffers from high bias and variance. To address this, the authors propose Semi-Supervised Model Evaluation (SSME), a framework that jointly models a small set of labeled instances and a large pool of unlabeled data by aggregating continuous prediction scores from multiple classifiers. The key idea is to fit a semi-supervised mixture model to the joint distribution of ground-truth labels and classifier predictions; any metric that is a function of scores and labels—accuracy, F1, expected calibration error—can then be estimated from the fitted model without additional ground-truth annotations. SSME is the first evaluation method to exploit three facts together: multiple classifiers often exist for the same task, continuous scores are available for all classes, and unlabeled data is far more plentiful than labeled data. Experiments across four domains—healthcare, content moderation, molecular property prediction, and image annotation—show that SSME reduces estimation error by 5.1× relative to using labeled data alone and by 2.4× relative to the next best method, and that it also improves evaluation on subsets of the test distribution (e.g., demographic subgroups) and for language models.
📝 Abstract
It remains difficult to evaluate machine learning classifiers in the absence of a large, labeled dataset. While labeled data can be prohibitively expensive or impossible to obtain, unlabeled data is plentiful. Here, we introduce Semi-Supervised Model Evaluation (SSME), a method that uses both labeled and unlabeled data to evaluate machine learning classifiers. SSME is the first evaluation method to take advantage of the fact that: (i) there are frequently multiple classifiers for the same task, (ii) continuous classifier scores are often available for all classes, and (iii) unlabeled data is often far more plentiful than labeled data. The key idea is to use a semi-supervised mixture model to estimate the joint distribution of ground truth labels and classifier predictions. We can then use this model to estimate any metric that is a function of classifier scores and ground truth labels (e.g., accuracy or expected calibration error). We present experiments in four domains where obtaining large labeled datasets is often impractical: (1) healthcare, (2) content moderation, (3) molecular property prediction, and (4) image annotation. Our results demonstrate that SSME estimates performance more accurately than do competing methods, reducing error by 5.1x relative to using labeled data alone and 2.4x relative to the next best competing method. SSME also improves accuracy when evaluating performance across subsets of the test distribution (e.g., specific demographic subgroups) and when evaluating the performance of language models.
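To make the core idea concrete, here is a minimal, hypothetical sketch of the mixture-model approach the abstract describes: fit a two-component Gaussian mixture to classifier score vectors with semi-supervised EM (labeled points have fixed component assignments; unlabeled points are soft-assigned), then estimate a classifier's accuracy as the expected agreement between its predictions and the label posterior. All names, the synthetic data, and the diagonal-Gaussian choice are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary task scored by two classifiers (illustrative only):
# class-1 examples receive higher logit scores than class-0 examples.
n_lab, n_unlab = 20, 2000
y_lab = rng.integers(0, 2, n_lab)
y_unlab = rng.integers(0, 2, n_unlab)  # oracle labels, used only to check

def scores(y, n_clf=2):
    # Each classifier emits a logit whose mean depends on the true label.
    return rng.normal(np.where(y[:, None] == 1, 1.5, -1.5), 1.0,
                      size=(len(y), n_clf))

X_lab, X_unlab = scores(y_lab), scores(y_unlab)

def fit_mixture(X_lab, y_lab, X_unlab, n_iter=50):
    """Semi-supervised EM for a 2-component diagonal Gaussian mixture.

    Labeled points have fixed (hard) responsibilities; unlabeled points
    are soft-assigned in each E-step.
    """
    X = np.vstack([X_lab, X_unlab])
    resp = np.zeros((len(X), 2))
    resp[np.arange(len(y_lab)), y_lab] = 1.0   # labeled: anchored
    resp[len(y_lab):] = 0.5                    # unlabeled: uniform init
    for _ in range(n_iter):
        # M-step: class priors, per-class means and variances.
        pi = resp.mean(0)
        mu = (resp.T @ X) / resp.sum(0)[:, None]
        var = (resp.T @ X**2) / resp.sum(0)[:, None] - mu**2 + 1e-6
        # E-step (unlabeled rows only): Gaussian log-likelihoods.
        ll = (-0.5 * (((X[len(y_lab):, None, :] - mu)**2 / var)
                      + np.log(2 * np.pi * var)).sum(-1) + np.log(pi))
        p = np.exp(ll - ll.max(1, keepdims=True))
        resp[len(y_lab):] = p / p.sum(1, keepdims=True)
    return pi, mu, var

def label_posterior(X, pi, mu, var):
    ll = (-0.5 * (((X[:, None, :] - mu)**2 / var)
                  + np.log(2 * np.pi * var)).sum(-1) + np.log(pi))
    p = np.exp(ll - ll.max(1, keepdims=True))
    return p / p.sum(1, keepdims=True)

pi, mu, var = fit_mixture(X_lab, y_lab, X_unlab)
post = label_posterior(X_unlab, pi, mu, var)

# Estimate classifier 0's accuracy with NO labels on the unlabeled pool:
# expected agreement between its predictions and the label posterior.
pred = (X_unlab[:, 0] > 0).astype(int)         # threshold logit at 0
est_acc = np.mean(post[np.arange(len(pred)), pred])
true_acc = np.mean(pred == y_unlab)            # oracle, for comparison only
```

The same fitted posterior supports any label-dependent metric (e.g., expected calibration error) and can be restricted to a subgroup of the unlabeled pool, which is how the subgroup evaluation described above would work in this sketch.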