On Uniform, Bayesian, and PAC-Bayesian Deep Ensembles

πŸ“… 2024-06-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the limited generalization gains and uncontrolled error cancellation of Bayesian neural network ensembles for classification, which stem from neglecting correlations between the errors of ensemble members. The authors propose a weighted ensemble method whose weights are optimized via a second-order PAC-Bayes bound. Its core innovation is the first explicit incorporation of error correlation into a PAC-Bayesian weighting framework, coupled with a tandem loss for robust fusion, and it comes with a rigorous, non-vacuous generalization error bound grounded in PAC-Bayes theory. Empirically, the method significantly outperforms standard Bayesian ensembling in classification accuracy while matching or exceeding uniformly weighted ensembles. Moreover, it enables safe fusion of multiple checkpoint models from a single training run, combining theoretical guarantees with practical applicability.
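For context, the tandem loss referenced above measures the probability that two ensemble members err on the same example, which is what makes error correlations explicit in the bound. A sketch of the key quantities, written in our own (not necessarily the paper's exact) notation, following the second-order PAC-Bayesian analysis of weighted majority votes this line of work builds on:

```latex
% Tandem loss of a hypothesis pair (h, h') on example (x, y):
% both hypotheses must misclassify the example simultaneously.
\ell(h, h', x, y) = \mathbf{1}[h(x) \neq y]\,\mathbf{1}[h'(x) \neq y]

% Second-order oracle bound: the risk of the rho-weighted majority
% vote MV_rho is controlled by the expected tandem loss under
% independent pairs drawn from the weighting rho.
L(\mathrm{MV}_\rho) \;\le\; 4\,\mathbb{E}_{(h,h') \sim \rho^2}
  \big[\,\mathbb{E}_{(x,y)}[\ell(h, h', x, y)]\,\big]
```

Minimizing the right-hand side in ρ penalizes pairs of models that make correlated mistakes, not just individually weak models.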

πŸ“ Abstract
It is common practice to combine deep neural networks into ensembles. These deep ensembles can benefit from the cancellation-of-errors effect: errors by ensemble members may average out, leading to better generalization performance than each individual network. Bayesian neural networks learn a posterior distribution over model parameters, and sampling and weighting networks according to this posterior yields an ensemble model referred to as a Bayes ensemble. This study reviews the argument that neither the sampling nor the weighting in Bayes ensembles are particularly well suited for increasing generalization performance, as they do not support the cancellation-of-errors effect. In contrast, we show that a weighted average of models, where the weights are optimized by minimizing a second-order PAC-Bayesian generalization bound, can improve generalization. It is crucial that the optimization takes correlations between models into account. This can be achieved by minimizing the tandem loss, which requires hold-out data for estimating error correlations. The tandem-loss-based PAC-Bayesian weighting increases robustness against correlated models and models with lower performance in an ensemble. This allows us to safely add several models from the same learning process to an ensemble, instead of using early stopping for selecting a single weight configuration. Our experiments provide further evidence that state-of-the-art intricate Bayes ensembles do not outperform simple uniformly weighted deep ensembles in terms of classification accuracy. Additionally, we show that these Bayes ensembles cannot match the performance of deep ensembles weighted by optimizing the tandem loss, which additionally provides non-vacuous, rigorous generalization guarantees.
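A minimal sketch of how such a correlation-aware weighting could be computed from hold-out data. All names (`errs`, `T`, `rho`) are illustrative, the error indicators are synthetic, and for brevity the sketch minimizes only the empirical second-order (tandem) term, omitting the KL complexity term of the full PAC-Bayes bound:

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_samples = 5, 200
# Hypothetical hold-out error indicators: errs[i, j] = 1.0 if model i
# misclassifies hold-out example j (random stand-in data here).
errs = (rng.random((n_models, n_samples)) < 0.3).astype(float)

# Empirical tandem loss matrix: T[i, k] is the fraction of hold-out
# examples that models i and k BOTH get wrong (error correlations).
T = errs @ errs.T / n_samples

# Optimize weights rho on the probability simplex via a softmax
# parameterization, minimizing the quadratic form rho^T T rho.
theta = np.zeros(n_models)  # theta = 0 gives uniform weights
lr = 1.0
for _ in range(500):
    rho = np.exp(theta - theta.max())
    rho /= rho.sum()
    grad_rho = 2.0 * T @ rho                     # d(rho^T T rho)/d rho
    # Chain rule through softmax: J^T g = rho * (g - <rho, g>)
    theta -= lr * rho * (grad_rho - rho @ grad_rho)

rho = np.exp(theta - theta.max())
rho /= rho.sum()
```

The resulting `rho` downweights models whose errors overlap heavily with the rest of the ensemble, which is exactly the robustness against correlated and weaker members described in the abstract.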
Problem

Research questions and friction points this paper is trying to address.

Deep Learning Model Optimization
Bayesian Neural Networks Ensemble
Classification Accuracy Improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weight Adjustment
Deep Ensemble Optimization
Reliable Generalization