Bound to Disagree: Generalization Bounds via Certifiable Surrogates

📅 2026-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing generalization bounds in deep learning are often vacuous, incomputable, or restricted to specific model architectures. This work proposes a general and verifiable generalization bound by constructing a surrogate model that estimates the true risk through prediction disagreement on unlabeled data, without requiring any modification to the original model or training procedure. The approach unifies sample compression, model compression, and PAC-Bayes theory, yielding a framework applicable to arbitrary predictors and compatible with multiple theoretical paradigms. Empirical evaluations demonstrate that the resulting bounds are both tight and effective, providing reliable certificates of generalization performance across diverse models.

📝 Abstract
Generalization bounds for deep learning models are typically vacuous, not computable, or restricted to specific model classes. In this paper, we tackle these issues by providing new disagreement-based certificates for the gap between the true risks of any two predictors. We then bound the true risk of the predictor of interest via a surrogate model that enjoys tight generalization guarantees, and by evaluating our disagreement bound on an unlabeled dataset. We empirically demonstrate the tightness of the obtained certificates and showcase the versatility of the approach by training surrogate models leveraging three different frameworks: sample compression, model compression, and PAC-Bayes theory. Importantly, such guarantees are achieved without modifying the target model or adapting the training procedure to the generalization framework.
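The core idea in the abstract can be illustrated with a minimal sketch (this is an assumption about the general mechanism, not the paper's actual certificates): for the 0-1 loss, the triangle inequality gives R(h) ≤ R(g) + Pr[h ≠ g], so a certified bound on the surrogate's risk plus the empirical disagreement on unlabeled data (plus a statistical slack term for the finite unlabeled sample, left as a parameter here) yields a bound on the target model's risk. The function names below are hypothetical.

```python
import numpy as np

def disagreement_rate(h_preds, g_preds):
    """Empirical disagreement Pr[h != g] between the target predictor h
    and the surrogate g, estimated on an unlabeled dataset."""
    h_preds = np.asarray(h_preds)
    g_preds = np.asarray(g_preds)
    return float(np.mean(h_preds != g_preds))

def risk_certificate(surrogate_risk_bound, disagreement, slack=0.0):
    """Triangle-inequality-style certificate for 0-1 loss:
    R(h) <= R(g) + Pr[h != g], where R(g) is replaced by its certified
    upper bound and `slack` stands in for the finite-sample estimation
    error of the disagreement term (how it is set depends on the
    theoretical framework, e.g. PAC-Bayes or sample compression)."""
    return surrogate_risk_bound + disagreement + slack

# Toy usage: target and surrogate agree on 3 of 4 unlabeled points,
# and the surrogate carries a certified risk bound of 0.10.
d = disagreement_rate([0, 1, 1, 0], [0, 1, 0, 0])   # 0.25
bound = risk_certificate(0.10, d)                    # 0.35
```

Note that the target model h is treated as a black box here: only its predictions on unlabeled data are needed, which is what makes the approach model-agnostic.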
Problem

Research questions and friction points this paper is trying to address.

generalization bounds
deep learning
vacuous bounds
model-agnostic guarantees
true risk
Innovation

Methods, ideas, or system contributions that make the work stand out.

disagreement-based bounds
certifiable surrogates
generalization guarantees
PAC-Bayes
model compression
Mathieu Bazinet
Département d’informatique et génie logiciel, Université Laval, Québec, Qc, Canada
Valentina Zantedeschi
ServiceNow, Laval University
Machine Learning
Pascal Germain
Associate Professor, Université Laval
Machine Learning