🤖 AI Summary
Variational Bayesian (VB) inference is widely adopted for its computational efficiency, yet standard variational families, such as mean-field approximations, fail to capture parameter dependencies, underestimate uncertainty, and yield distorted covariance estimates under model misspecification. To address these limitations, we propose Variational Bagging, a framework that integrates the bootstrap aggregating (bagging) principle into VB inference. By resampling the data and running variational inference in parallel on each resample, while preserving the mean-field structure within each fit, our method constructs a posterior ensemble that automatically recovers off-diagonal covariance terms and enables well-calibrated uncertainty quantification. We establish theoretically that the resulting estimator satisfies the Bernstein–von Mises property and achieves optimal posterior contraction rates. Empirically, Variational Bagging significantly improves the robustness and accuracy of uncertainty estimation across parametric models, mixture models, and deep neural networks, particularly under model misspecification, where it continues to recover the target covariance structure.
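A standard mixture identity suggests where the off-diagonal recovery comes from. As a sketch, suppose the bagged posterior is the uniform mixture $q_{\text{bag}} = \frac{1}{B}\sum_{b=1}^{B} q_b$ of the $B$ per-resample mean-field fits $q_b$, with means $\mu_b$ and diagonal covariances $\Sigma_b$ (the uniform-mixture form is an assumption made here for illustration; the paper's exact aggregation rule may differ). Then

$$
\mathrm{Cov}_{q_{\text{bag}}}(\theta)
= \frac{1}{B}\sum_{b=1}^{B}\Sigma_b
+ \frac{1}{B}\sum_{b=1}^{B}(\mu_b-\bar\mu)(\mu_b-\bar\mu)^{\top},
\qquad \bar\mu = \frac{1}{B}\sum_{b=1}^{B}\mu_b .
$$

The first term inherits the mean-field diagonal structure, but the second term, the scatter of the variational means across bootstrap resamples, is in general a full matrix, which is what restores the off-diagonal covariance.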
📝 Abstract
Variational Bayes methods are popular due to their computational efficiency and adaptability to diverse applications. In specifying the variational family, mean-field classes are commonly used; they enable efficient algorithms such as coordinate ascent variational inference (CAVI) but fail to capture parameter dependence and typically underestimate uncertainty. In this work, we introduce a variational bagging approach that integrates a bagging procedure with variational Bayes, resulting in a bagged variational posterior for improved inference. We establish strong theoretical guarantees, including posterior contraction rates for general models and a Bernstein–von Mises (BVM) type theorem that ensures valid uncertainty quantification. Notably, our results show that even when using a mean-field variational family, our approach can recover off-diagonal elements of the limiting covariance structure and provide proper uncertainty quantification. In addition, variational bagging is robust to model misspecification, with the recovered covariance structure matching that of the target covariance. We illustrate our variational bagging method in numerical studies through applications to parametric models, finite mixture models, deep neural networks, and variational autoencoders (VAEs).
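To make the procedure concrete, here is a minimal sketch in Python/NumPy, not the paper's implementation. It assumes a conjugate Bayesian linear regression, where the mean-field CAVI fixed point is available in closed form (exact posterior mean, with per-coordinate variances given by the inverse diagonal of the posterior precision), and it aggregates the per-resample fits as a uniform mixture. The function name `mean_field_vb` and all constants are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two strongly correlated predictors, so the exact posterior over
# the regression coefficients has a large off-diagonal covariance term.
n, sigma2, tau2 = 200, 1.0, 10.0          # sample size, noise var, prior var
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 0.1 * rng.normal(size=n)])
y = X @ np.array([1.0, -1.0]) + np.sqrt(sigma2) * rng.normal(size=n)

def mean_field_vb(X, y):
    """Closed-form mean-field Gaussian fit for conjugate linear regression.

    For a Gaussian posterior with precision matrix L, the CAVI fixed point
    keeps the exact posterior mean but uses variances 1 / diag(L), so the
    fitted covariance is diagonal by construction.
    """
    L = X.T @ X / sigma2 + np.eye(X.shape[1]) / tau2   # posterior precision
    mean = np.linalg.solve(L, X.T @ y / sigma2)        # exact posterior mean
    var = 1.0 / np.diag(L)                             # mean-field variances
    return mean, var

# Variational bagging: refit mean-field VB on bootstrap resamples and pool
# draws from the resulting mixture of mean-field posteriors.
B, draws_per_fit = 200, 50
samples = []
for _ in range(B):
    idx = rng.integers(0, n, size=n)                   # bootstrap resample
    m, v = mean_field_vb(X[idx], y[idx])
    samples.append(m + np.sqrt(v) * rng.normal(size=(draws_per_fit, 2)))
samples = np.vstack(samples)

# Plain mean-field VB is diagonal by construction; the bagged ensemble
# recovers off-diagonal covariance through variation in the bootstrap means.
_, v0 = mean_field_vb(X, y)
print("mean-field covariance:\n", np.diag(v0))
print("bagged covariance:\n", np.cov(samples.T))
```

Because the two predictors are nearly collinear, the exact posterior covariance has a strong negative off-diagonal entry; the pooled bagged draws reproduce this, while the single mean-field fit cannot.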