🤖 AI Summary
This study addresses the computational and statistical challenges of measuring the distance from calibration: how to efficiently and accurately quantify the discrepancy between predicted probabilities and true outcomes, given either full access to the data distribution or only samples from it. The work presents the first exact and efficient algorithm for the special case of a uniform marginal distribution and noiseless labels, improving on prior methods that achieved only an O(1/√|X|) additive approximation. Perhaps surprisingly, the problem becomes NP-hard as soon as either of these two assumptions is dropped; for the general case, the work instead provides a polynomial-time approximation scheme (PTAS). By introducing techniques for sparsifying both the data distribution and the predictor, the authors shrink the search space for computation and obtain stronger concentration for estimation, proving a tight Θ(1/ε³) sample complexity for one-sided estimation, whereas two-sided estimation necessarily incurs a polynomial dependence on the domain size.
📝 Abstract
The distance from calibration, introduced by Błasiok, Gopalan, Hu, and Nakkiran (STOC 2023), has recently emerged as a central measure of miscalibration for probabilistic predictors. We study the fundamental problems of computing and estimating this quantity, given either an exact description of the data distribution or only sample access to it.
We give an efficient algorithm that exactly computes the calibration distance when the distribution has a uniform marginal and noiseless labels, which improves the $O(1/\sqrt{|\mathcal{X}|})$ additive approximation of Qiao and Zheng (COLT 2024) for this special case. Perhaps surprisingly, the problem becomes $\mathsf{NP}$-hard when either of the two assumptions is removed. We extend our algorithm to a polynomial-time approximation scheme for the general case.
For the estimation problem, we show that $\Theta(1/\epsilon^3)$ samples are necessary and sufficient for the empirical calibration distance to be upper bounded by the true distance plus $\epsilon$. In contrast, a polynomial dependence on the domain size -- incurred by the learning-based baseline -- is unavoidable for two-sided estimation.
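As a concrete (and deliberately simplified) illustration of one-sided estimation from samples, the Python sketch below computes the binned expected calibration error of an empirical sample of (prediction, label) pairs. For a predictor with finitely many values this statistic upper-bounds the distance to calibration, so it errs on one side; it is *not* the empirical calibration distance analyzed in the paper, and the data below are synthetic.

```python
import random
from collections import defaultdict

def empirical_ece(samples):
    """Binned expected calibration error of (prediction, label) pairs:
    sum over distinct prediction values v of  p_hat(v) * |mean(y | v) - v|.
    For a finitely-valued predictor this upper-bounds the distance to
    calibration, making it a one-sided (over-)estimate of that distance."""
    buckets = defaultdict(list)
    for v, y in samples:
        buckets[v].append(y)
    n = len(samples)
    return sum(len(ys) / n * abs(sum(ys) / len(ys) - v)
               for v, ys in buckets.items())

# Synthetic data: predictions 0.3 and 0.7; the label rate at v = 0.7 is
# actually 0.6, planting a miscalibration of 0.1 on half of the mass.
random.seed(0)
samples = [(0.3, int(random.random() < 0.3)) for _ in range(5000)]
samples += [(0.7, int(random.random() < 0.6)) for _ in range(5000)]
print(empirical_ece(samples))  # should land near 0.5 * 0.1 = 0.05
```

With more samples the estimate concentrates, but note that a naive plug-in like this inherits the estimator's bias; the paper's $\Theta(1/\epsilon^3)$ bound concerns the empirical calibration distance itself, not this proxy.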
Our positive results are based on simple sparsifications of both the distribution and the target predictor, which significantly reduce the search space for computation and lead to stronger concentration for the estimation problem. To prove the hardness results, we introduce new techniques for certifying lower bounds on the calibration distance -- a problem that is hard in general due to its $\mathsf{coNP}$-completeness.
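The predictor-sparsification idea can be sketched in a few lines of Python (a toy rendition under our own assumptions, not the paper's exact procedure): round every prediction to the nearest multiple of a grid width `eps`, so the predictor takes at most 1/eps + 1 distinct values while each prediction moves by at most eps/2. Since the calibration distance is 1-Lipschitz in the average movement of the predictions (by a coupling argument), the discretization costs at most eps/2 in accuracy.

```python
def sparsify_predictions(preds, eps):
    """Round each prediction in [0, 1] to the nearest multiple of eps.
    With k = 1/eps grid cells, the output takes at most k + 1 distinct
    values and each prediction moves by at most eps/2, so (assuming the
    Lipschitz property above) the calibration distance of the rounded
    predictor is within eps/2 of the original's."""
    k = round(1 / eps)  # assume eps evenly divides 1, e.g. eps = 0.1
    return [min(1.0, max(0.0, round(p * k) / k)) for p in preds]

eps = 0.1
preds = [0.03, 0.14, 0.14, 0.261, 0.55, 0.98]
sparse = sparsify_predictions(preds, eps)
# Each prediction moved by at most eps/2 (up to float noise) ...
assert all(abs(p - q) <= eps / 2 + 1e-9 for p, q in zip(preds, sparse))
# ... and the support collapsed to at most 1/eps + 1 grid values.
assert len(set(sparsify_predictions([i / 999 for i in range(1000)], eps))) <= round(1 / eps) + 1
```

The distribution-side sparsification plays an analogous role for the domain; the grid width trades approximation accuracy against the size of the resulting search space.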