🤖 AI Summary
This paper addresses the problem of evaluating and ranking model generalization under distribution shift when test-set labels are unavailable, covering both dataset-centric (evaluating a single model across multiple test sets) and model-centric (ranking multiple models on a single test set) deployment scenarios. We propose a hybrid unsupervised evaluation metric that jointly leverages prediction confidence and prediction dispersity, and introduce the nuclear norm as a novel, efficient, and robust unified measure computed directly from the model's output probability matrix. Unlike prior approaches, our method requires no ground-truth labels and imposes no architectural assumptions. Extensive experiments demonstrate that it consistently outperforms confidence-only and dispersity-only baselines across diverse settings, including multi-task learning, varied distribution shifts, class imbalance, and real-world datasets, achieving superior generalizability and practical utility.
📝 Abstract
Assessing model generalization under distribution shift is essential for real-world deployment, particularly when labeled test data is unavailable. This paper presents a unified and practical framework for unsupervised model evaluation and ranking in two common deployment settings: (1) estimating the accuracy of a fixed model on multiple unlabeled test sets (dataset-centric evaluation), and (2) ranking a set of candidate models on a single unlabeled test set (model-centric evaluation). We demonstrate that two intrinsic properties of model predictions, namely confidence (which reflects prediction certainty) and dispersity (which captures the diversity of predicted classes), together provide strong and complementary signals for generalization. We systematically benchmark a set of confidence-based, dispersity-based, and hybrid metrics across a wide range of model architectures, datasets, and distribution shift types. Our results show that hybrid metrics consistently outperform single-aspect metrics in both dataset-centric and model-centric evaluation settings. In particular, the nuclear norm of the prediction matrix provides robust and accurate performance across tasks, including real-world datasets, and maintains reliability under moderate class imbalance. These findings offer a practical and generalizable basis for unsupervised model assessment in deployment scenarios.
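To make the hybrid metric concrete, the sketch below computes the nuclear norm (the sum of singular values) of an N × K prediction-probability matrix. The normalization by √(N · min(N, K)) is an assumption for illustration: it scales the score to 1 for confident, one-hot predictions spread evenly across classes, so higher values indicate jointly higher confidence and dispersity. The function name and exact normalization are illustrative, not necessarily the paper's implementation.

```python
import numpy as np

def nuclear_norm_score(probs):
    """Hybrid confidence/dispersity score for an N x K matrix of
    predicted class probabilities (rows sum to 1).

    Returns the nuclear norm of the matrix, normalized so that a
    balanced set of confident one-hot predictions scores 1.0.
    """
    probs = np.asarray(probs, dtype=float)
    n, k = probs.shape
    # Nuclear norm = sum of singular values of the prediction matrix.
    nuc = np.linalg.norm(probs, ord="nuc")
    # sqrt(n * min(n, k)) upper-bounds the nuclear norm of a row-stochastic
    # matrix; it is attained by confident, evenly dispersed one-hot rows.
    return nuc / np.sqrt(n * min(n, k))
```

As a sanity check, 100 balanced one-hot predictions over 4 classes score 1.0, while maximally uncertain uniform predictions (every row 0.25) collapse to a rank-1 matrix and score far lower, matching the intuition that the nuclear norm rewards both certainty and class diversity.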