AI Summary
This work addresses the inaccuracy of Area Under the Risk-Coverage curve (AURC) estimation for selective classifiers (SCs) in safety-critical applications under few-shot settings. We propose the first rigorous population-level formalization of AURC and equivalently characterize it as a weighted risk function. Building upon this, we design a Monte Carlo plug-in estimator, for which we theoretically establish a convergence rate of $\mathcal{O}(\sqrt{\ln n / n})$, along with low bias and a tight mean-squared error bound. Extensive experiments across multiple datasets, model architectures, and confidence scoring functions demonstrate the estimator's consistency and effectiveness. Our approach significantly enhances the statistical reliability and trustworthiness of both AURC evaluation and optimization for selective classification in resource-constrained, high-stakes scenarios.
Abstract
The selective classifier (SC) has been proposed for rank-based uncertainty thresholding, with applications in safety-critical areas such as medical diagnostics, autonomous driving, and the justice system. The Area Under the Risk-Coverage Curve (AURC) has emerged as the foremost evaluation metric for assessing the performance of SC systems. In this work, we present a formal statistical formulation of the population AURC, deriving an equivalent expression that can be interpreted as a reweighted risk function. Through Monte Carlo methods, we obtain empirical AURC plug-in estimators for finite-sample scenarios. The weight estimators associated with these plug-in estimators are shown to be consistent, with low bias and tightly bounded mean squared error (MSE). The plug-in estimators are proven to converge at a rate of $\mathcal{O}(\sqrt{\ln(n)/n})$, demonstrating statistical consistency. We empirically validate the effectiveness of our estimators through experiments across multiple datasets, model architectures, and confidence score functions (CSFs), demonstrating consistency and effectiveness in fine-tuning AURC performance.
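To make the quantity being estimated concrete, the following is a minimal sketch of a standard finite-sample AURC computation: samples are ranked by a confidence score function, selective risk is evaluated at each coverage level, and the risk-coverage curve is averaged. The function name `empirical_aurc` and its interface are illustrative assumptions, not the paper's exact estimator (in particular, the paper's Monte Carlo plug-in estimator uses derived weights rather than this direct curve average).

```python
import numpy as np

def empirical_aurc(confidence: np.ndarray, correct: np.ndarray) -> float:
    """Direct empirical AURC: mean selective risk over all coverage levels.

    confidence: per-sample confidence scores (higher = more confident).
    correct:    per-sample 0/1 correctness indicators.
    Note: this is a generic textbook computation, not the paper's
    weighted plug-in estimator.
    """
    # Rank samples from most to least confident.
    order = np.argsort(-confidence)
    errors = 1.0 - correct[order].astype(float)
    # Selective risk at coverage k/n = mean 0/1 error over the
    # k most confident samples.
    selective_risk = np.cumsum(errors) / np.arange(1, len(errors) + 1)
    # AURC = average of selective risk across coverage levels (Riemann sum).
    return float(selective_risk.mean())
```

A well-calibrated confidence score that ranks correct predictions above incorrect ones yields a lower AURC than a poorly calibrated one on the same predictions, which is why AURC serves as the headline metric for selective classification.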