Confidence and Dispersity as Signals: Unsupervised Model Evaluation and Ranking

📅 2025-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the problem of evaluating and ranking model generalization under distribution shift in the absence of test-set labels, covering both dataset-centric (evaluating a single model across multiple test sets) and model-centric (ranking multiple models on a single test set) deployment scenarios. It proposes a hybrid unsupervised evaluation metric that jointly leverages prediction confidence and prediction dispersity, and introduces the nuclear norm of the model's output probability matrix as an efficient and robust unified measure of both. Unlike prior approaches, the method requires no ground-truth labels and imposes no architectural assumptions. Extensive experiments show that it consistently outperforms confidence-only and dispersity-only baselines across diverse settings, including multi-task learning, various distribution shifts, class imbalance, and real-world datasets, demonstrating strong generalizability and practical utility.

📝 Abstract
Assessing model generalization under distribution shift is essential for real-world deployment, particularly when labeled test data is unavailable. This paper presents a unified and practical framework for unsupervised model evaluation and ranking in two common deployment settings: (1) estimating the accuracy of a fixed model on multiple unlabeled test sets (dataset-centric evaluation), and (2) ranking a set of candidate models on a single unlabeled test set (model-centric evaluation). We demonstrate that two intrinsic properties of model predictions, namely confidence (which reflects prediction certainty) and dispersity (which captures the diversity of predicted classes), together provide strong and complementary signals for generalization. We systematically benchmark a set of confidence-based, dispersity-based, and hybrid metrics across a wide range of model architectures, datasets, and distribution shift types. Our results show that hybrid metrics consistently outperform single-aspect metrics on both dataset-centric and model-centric evaluation settings. In particular, the nuclear norm of the prediction matrix provides robust and accurate performance across tasks, including real-world datasets, and maintains reliability under moderate class imbalance. These findings offer a practical and generalizable basis for unsupervised model assessment in deployment scenarios.
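The two signals named in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's code: it assumes confidence is instantiated as the mean maximum softmax probability and dispersity as the normalized entropy of the predicted-class histogram, both common choices for these quantities.

```python
import numpy as np

def confidence_score(probs: np.ndarray) -> float:
    # Mean maximum softmax probability across the test set:
    # high when the model is certain about each prediction.
    return float(probs.max(axis=1).mean())

def dispersity_score(probs: np.ndarray) -> float:
    # Normalized entropy of the hard-prediction class histogram:
    # 1.0 when predicted classes are perfectly balanced, 0.0 when
    # every sample is assigned to a single class.
    n, k = probs.shape
    freq = np.bincount(probs.argmax(axis=1), minlength=k) / n
    freq = freq[freq > 0]
    return float(-(freq * np.log(freq)).sum() / np.log(k))

# Confident, balanced predictions score high on both signals:
probs = np.eye(4)                # 4 samples, 4 classes, one-hot rows
print(confidence_score(probs))   # 1.0
print(dispersity_score(probs))   # 1.0
```

A degenerate model that predicts the same class for everything can still be highly confident, which is why the paper argues the two signals are complementary rather than redundant.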
Problem

Research questions and friction points this paper is trying to address.

Evaluating model generalization without labeled test data under distribution shifts
Ranking models and test sets using confidence and dispersity signals derived from model predictions
Developing hybrid metrics for robust unsupervised model performance assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining confidence and dispersity for model evaluation
Hybrid metrics outperform single-aspect evaluation methods
Nuclear norm of prediction matrix provides robust performance
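The nuclear-norm idea can be sketched as follows. This is an illustration, not the authors' implementation: it assumes the prediction matrix is the N x K matrix of row-stochastic softmax outputs, and normalizes by sqrt(N * min(N, K)), which is an upper bound on the nuclear norm of such a matrix, so the score lies in (0, 1].

```python
import numpy as np

def nuclear_norm_score(probs: np.ndarray) -> float:
    # Sum of singular values of the N x K softmax prediction matrix,
    # divided by the bound sqrt(N * min(N, K)). The score is large
    # only when rows are near one-hot (confident) and predicted
    # classes are balanced (dispersed), capturing both signals at once.
    n, k = probs.shape
    s = np.linalg.svd(probs, compute_uv=False)
    return float(s.sum() / np.sqrt(n * min(n, k)))

# Confident, balanced predictions attain the maximum...
one_hot = np.array([[1., 0.], [0., 1.], [1., 0.], [0., 1.]])
print(nuclear_norm_score(one_hot))   # close to 1.0

# ...while maximally uncertain predictions score lower.
uniform = np.full((4, 2), 0.5)
print(nuclear_norm_score(uniform))   # close to 0.5
```

A single SVD over the prediction matrix makes this cheap to compute and free of architectural assumptions, consistent with the efficiency and generality claims above.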
👥 Authors

Weijian Deng (ANU)
Weijie Tu (School of Computing, The Australian National University, Canberra, ACT 0200, Australia)
Ibrahim Radwan (University of Canberra)
Mohammad Abu Alsheikh (Associate Professor, University of Canberra)
Stephen Gould (Professor at Australian National University)
Liang Zheng (School of Computing, The Australian National University, Canberra, ACT 0200, Australia)