🤖 AI Summary
Existing calibration theory focuses primarily on binary classification, and parametric recalibration methods lack generalization guarantees. Method: We establish the first PAC-Bayes generalization analysis framework for multiclass calibration error, deriving an optimizable upper bound on the calibration generalization error. Building on this bound, we propose the first theoretically grounded recalibration algorithm, which integrates nonparametric binning estimation with Gaussian process (GP) calibration and whose optimization is explicitly guided by the generalization error bound. Contribution/Results: Extensive experiments across multiple benchmark datasets and base models show that our method significantly improves GP calibration performance, empirically validating both the theoretical soundness and the practical efficacy of the generalization-guided approach.
📝 Abstract
Nonparametric estimation with binning is widely employed in calibration error evaluation and in the recalibration of machine learning models. Theoretical analyses of the bias induced by this estimation approach have recently been pursued; however, the understanding of how the calibration error generalizes to unseen data remains limited. In addition, although many recalibration algorithms have been proposed, their generalization performance lacks theoretical guarantees. To address this problem, we conduct a generalization analysis of the calibration error under the probably approximately correct (PAC) Bayes framework. This approach enables us to derive the first optimizable upper bound for the generalization error in the calibration context. We then propose a generalization-aware recalibration algorithm based on our generalization theory. Numerical experiments show that our algorithm improves Gaussian-process-based recalibration performance on various benchmark datasets and models.
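For readers unfamiliar with the binning estimator the abstract refers to, here is a minimal sketch of the standard nonparametric binned expected calibration error (ECE). The bin count and equal-width binning scheme are illustrative assumptions, not the paper's exact setup, and `binned_ece` is a hypothetical helper name:

```python
import numpy as np

def binned_ece(confidences, correct, n_bins=15):
    """Binned ECE: weighted average, over confidence bins, of the
    absolute gap between mean accuracy and mean confidence."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # half-open bins [lo, hi); the last bin also includes hi = 1.0
        if hi < 1.0:
            mask = (confidences >= lo) & (confidences < hi)
        else:
            mask = (confidences >= lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += (mask.sum() / n) * gap  # bin weight times gap
    return ece
```

A perfectly calibrated model (confidence equals per-bin accuracy) yields an ECE of zero; the paper's concern is how such a finite-sample estimate relates to the true calibration error on unseen data.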