🤖 AI Summary
This work addresses the lack of robust quantitative metrics for multi-calibration. We propose a novel nonparametric measure based on the Kuiper statistic—the first extension of the Kuiper test to joint calibration assessment across multiple subpopulations. To align each subgroup’s contribution with its discernibility, we introduce a signal-to-noise ratio (SNR)-weighted mechanism, substantially improving both sensitivity to calibration deviations and statistical stability. Unlike conventional approaches based on binning or kernel density estimation, our metric requires no hyperparameter tuning and makes no distributional assumptions. Experiments on benchmark datasets demonstrate its ability to detect fine-grained calibration imbalances precisely. Ablation studies confirm that SNR weighting effectively suppresses noise and improves measurement robustness.
📝 Abstract
A suitable scalar metric can help measure multi-calibration, defined as follows. When the expected values of observed responses are equal to the corresponding predicted probabilities, the probabilistic predictions are known as “perfectly calibrated.” When the predicted probabilities are perfectly calibrated simultaneously across several subpopulations, the probabilistic predictions are known as “perfectly multi-calibrated.” In practice, predicted probabilities are seldom perfectly multi-calibrated, so a statistic measuring the distance from perfect multi-calibration is informative. A recently proposed metric for calibration, based on the classical Kuiper statistic, is a natural basis for a new metric of multi-calibration and avoids well-known problems of metrics based on binning or kernel density estimation. The newly proposed metric weights the contributions of different subpopulations in proportion to their signal-to-noise ratios; ablation studies on real data demonstrate that the metric becomes noisy when the signal-to-noise ratios are omitted. Numerical examples on benchmark data sets illustrate the new metric.
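The construction described above can be sketched in code. The snippet below is a minimal illustration, not the paper's implementation: for each subgroup it computes a Kuiper-style calibration statistic (the range of the cumulative differences between observed responses and predicted probabilities, with predictions sorted in increasing order), then aggregates the subgroup statistics with weights proportional to an assumed signal-to-noise ratio. The specific SNR proxy used here (the square root of the subgroup size) is a placeholder assumption; the paper's exact weighting may differ.

```python
import numpy as np


def kuiper_calibration(y, p):
    """Kuiper-style calibration statistic: the range (max minus min) of the
    cumulative differences between observed responses y and predicted
    probabilities p, taken along predictions sorted in increasing order."""
    order = np.argsort(p, kind="stable")
    # Normalized cumulative differences, with a leading zero so that a
    # perfectly calibrated sequence yields a statistic of exactly 0.
    cum = np.concatenate(([0.0], np.cumsum(y[order] - p[order]) / len(p)))
    return float(cum.max() - cum.min())


def snr_weighted_multicalibration(y, p, groups):
    """Aggregate the per-subgroup Kuiper statistics into one scalar,
    weighting each subgroup by a hypothetical SNR proxy (sqrt of the
    subgroup size) -- an illustrative assumption, not the paper's formula."""
    stats, weights = [], []
    for g in np.unique(groups):
        mask = groups == g
        stats.append(kuiper_calibration(y[mask], p[mask]))
        weights.append(np.sqrt(mask.sum()))
    weights = np.asarray(weights)
    return float(np.dot(weights / weights.sum(), stats))
```

For example, predictions that match the responses exactly score 0, while a constant overconfident prediction scores strictly above 0, so the scalar orders the two cases as the definition of multi-calibration requires.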