ALSA: Anchors in Logit Space for Out-of-Distribution Accuracy Estimation

📅 2025-08-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem of estimating model accuracy on unlabeled, out-of-distribution (OOD) datasets under distribution shift, without relying on brittle softmax probabilities or on costly, domain-sensitive data-similarity metrics. To this end, the authors propose ALSA, a framework that introduces learnable anchors in the logit space. Each anchor is assigned an influence function, and the aggregated influence over a dataset's logit distribution is used to model the intrinsic relationship between logits and model performance, preserving predictive information that softmax compression would discard. Because ALSA operates directly on logits and bypasses the softmax transformation, it is more robust and generalizes better across domains. Evaluated on 12 benchmarks spanning vision, language, and graph domains under diverse distribution shifts, including severe ones, ALSA consistently outperforms state-of-the-art softmax- and similarity-based methods, achieving an average error reduction of 37.2%.

📝 Abstract
Estimating model accuracy on unseen, unlabeled datasets is crucial for real-world machine learning applications, especially under distribution shifts that can degrade performance. Existing methods often rely on predicted class probabilities (softmax scores) or data similarity metrics. While softmax-based approaches benefit from representing predictions on the standard simplex, compressing logits into probabilities leads to information loss. Meanwhile, similarity-based methods can be computationally expensive and domain-specific, limiting their broader applicability. In this paper, we introduce ALSA (Anchors in Logit Space for Accuracy estimation), a novel framework that preserves richer information by operating directly in the logit space. Building on theoretical insights and empirical observations, we demonstrate that the aggregation and distribution of logits exhibit a strong correlation with the predictive performance of the model. To exploit this property, ALSA employs an anchor-based modeling strategy: multiple learnable anchors are initialized in logit space, each assigned an influence function that captures subtle variations in the logits. This allows ALSA to provide robust and accurate performance estimates across a wide range of distribution shifts. Extensive experiments on vision, language, and graph benchmarks demonstrate ALSA's superiority over both softmax- and similarity-based baselines. Notably, ALSA's robustness under significant distribution shifts highlights its potential as a practical tool for reliable model evaluation.
Problem

Research questions and friction points this paper is trying to address.

Estimating model accuracy on unseen unlabeled datasets
Addressing performance degradation under distribution shifts
Overcoming information loss from softmax compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Operates directly in logit space
Uses learnable anchors with influence functions
Exploits logit distribution correlation with performance
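The anchor-based idea above can be illustrated with a minimal sketch. The paper does not specify its exact parameterization here, so the RBF form of the influence functions, the sigmoid aggregation, and all names below (`alsa_accuracy_estimate`, `bandwidth`) are illustrative assumptions, not the authors' implementation:

```python
import math

def alsa_accuracy_estimate(logits, anchors, weights, bandwidth=1.0):
    """Hypothetical anchor-based accuracy estimator in logit space.

    logits:  list of length-C logit vectors from the unlabeled target set
    anchors: list of length-C learnable anchor points in logit space
    weights: per-anchor influence weights (learned on labeled source data)
    """
    scores = []
    for z in logits:
        s = 0.0
        for a, w in zip(anchors, weights):
            # Squared distance from this logit vector to the anchor
            d2 = sum((zi - ai) ** 2 for zi, ai in zip(z, a))
            # Assumed RBF influence function: nearby anchors contribute more
            s += w * math.exp(-d2 / (2.0 * bandwidth ** 2))
        # Squash the aggregated influence into a per-sample score in (0, 1)
        scores.append(1.0 / (1.0 + math.exp(-s)))
    # The dataset-level accuracy estimate is the mean per-sample score
    return sum(scores) / len(scores)
```

In this sketch the anchors and weights stand in for the learnable parameters; in the actual framework they would be optimized on labeled source-domain data so that the estimate matches observed accuracy.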
👥 Authors
Chenzhi Liu
School of Electrical Engineering and Computer Science, The University of Queensland, Brisbane, Australia

Mahsa Baktashmotlagh
University of Queensland
Machine Learning · Computer Vision

Yanran Tang
School of Electrical Engineering and Computer Science, The University of Queensland, Brisbane, Australia

Zi Huang
PhD Candidate
Deep Learning

Ruihong Qiu
ARC DECRA Fellow, Lecturer (Assistant Professor) @ The University of Queensland
Graph · Large Language Models