🤖 AI Summary
This work tackles the problem of accurately estimating model accuracy on unlabeled, out-of-distribution (OOD) datasets under distribution shift, without relying on brittle softmax probabilities or on costly, domain-sensitive data similarity metrics. To this end, we propose ALSA, the first framework to introduce learnable anchors in the logit space. ALSA combines influence-function modeling with an analysis of how logits aggregate and distribute, directly capturing the intrinsic relationship between logit distributions and model performance and thereby preserving richer predictive information. Crucially, ALSA operates on raw logits and bypasses the softmax transformation, which substantially improves robustness and cross-domain generalization. Evaluated on 12 benchmarks spanning vision, language, and graph domains under diverse distribution shifts, including severe ones, ALSA consistently outperforms state-of-the-art softmax- and similarity-based methods, reducing estimation error by 37.2% on average.
📝 Abstract
Estimating model accuracy on unseen, unlabeled datasets is crucial for real-world machine learning applications, especially under distribution shifts that can degrade performance. Existing methods often rely on predicted class probabilities (softmax scores) or data similarity metrics. While softmax-based approaches benefit from representing predictions on the standard simplex, compressing logits into probabilities leads to information loss. Meanwhile, similarity-based methods can be computationally expensive and domain-specific, limiting their broader applicability. In this paper, we introduce ALSA (Anchors in Logit Space for Accuracy estimation), a novel framework that preserves richer information by operating directly in the logit space. Building on theoretical insights and empirical observations, we demonstrate that the aggregation and distribution of logits exhibit a strong correlation with the predictive performance of the model. To exploit this property, ALSA employs an anchor-based modeling strategy: multiple learnable anchors are initialized in logit space, each assigned an influence function that captures subtle variations in the logits. This allows ALSA to provide robust and accurate performance estimates across a wide range of distribution shifts. Extensive experiments on vision, language, and graph benchmarks demonstrate ALSA's superiority over both softmax- and similarity-based baselines. Notably, ALSA's robustness under significant distribution shifts highlights its potential as a practical tool for reliable model evaluation.
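The anchor-based strategy described in the abstract (learnable anchors in logit space, each with an influence function whose aggregated response predicts accuracy) can be sketched as follows. This is a minimal illustrative toy, not the paper's actual method: the Gaussian (RBF) influence function, the sigmoid aggregation, and all names (`alsa_estimate`, `bandwidths`, etc.) are assumptions for exposition, and the anchors, weights, and bandwidths shown as fixed arrays would in practice be learned parameters.

```python
import numpy as np

def alsa_estimate(logits, anchors, weights, bandwidths):
    """Toy ALSA-style estimator.

    Each anchor exerts an RBF influence on every sample's logit
    vector; per-sample influences are weighted, averaged over the
    dataset, and squashed into (0, 1) as an accuracy estimate.
    (Illustrative form only; the paper's influence function and
    aggregation may differ.)
    """
    # Pairwise squared distances, shape (n_samples, n_anchors).
    d2 = ((logits[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=-1)
    # RBF influence of each anchor on each sample.
    influence = np.exp(-d2 / (2.0 * bandwidths**2))
    # Weighted sum of anchor influences -> per-sample score.
    score = influence @ weights
    # Mean score mapped to (0, 1) as the estimated accuracy.
    return float(1.0 / (1.0 + np.exp(-score.mean())))

rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 10))   # unlabeled dataset's logits
anchors = rng.normal(size=(5, 10))    # anchors in logit space (fixed here)
weights = rng.normal(size=5)          # per-anchor weights (fixed here)
bandwidths = np.ones(5)               # per-anchor RBF widths (fixed here)

acc_est = alsa_estimate(logits, anchors, weights, bandwidths)
```

In the full framework these parameters would be fit (e.g., by regressing estimated accuracy against true accuracy on held-out shifted datasets), so the sketch only shows the forward pass that turns a dataset's logit distribution into a scalar estimate.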