Scalable Bayesian Monte Carlo: fast uncertainty estimation beyond deep ensembles

📅 2025-05-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high computational cost and poor scalability of uncertainty estimation in Bayesian deep learning. The authors propose Scalable Bayesian Monte Carlo (SBMC), which combines parallel sequential Monte Carlo (SMC) or Markov chain Monte Carlo (MCMC) sampling with anchoring at the maximum a posteriori (MAP) estimate, enabling efficient interpolation between point estimation and full posterior sampling. SBMC keeps computational cost comparable to state-of-the-art (SOTA) methods while matching or exceeding the predictive accuracy of deep ensembles on MNIST, CIFAR, and IMDb, and it substantially improves the quality and calibration of epistemic uncertainty quantification. The core contribution is achieving scalability, predictive accuracy, and high-fidelity uncertainty modeling simultaneously, establishing a practical paradigm for large-scale Bayesian deep learning.

📝 Abstract
This work introduces a new method called scalable Bayesian Monte Carlo (SBMC). The model interpolates between a point estimator and the posterior, and the algorithm is a parallel implementation of a consistent (asymptotically unbiased) Bayesian deep learning algorithm: sequential Monte Carlo (SMC) or Markov chain Monte Carlo (MCMC). The method is motivated theoretically, and its utility is demonstrated on practical examples: MNIST, CIFAR, IMDb. A systematic numerical study reveals that parallel implementations of SMC and MCMC are comparable to serial implementations in terms of performance and total cost, and they achieve accuracy at or beyond state-of-the-art (SOTA) methods such as deep ensembles at convergence, along with substantially improved uncertainty quantification (UQ), in particular epistemic UQ. But even parallel implementations are expensive, with an irreducible time barrier much larger than the cost of the MAP estimator. Compressing time further leads to rapid degradation of accuracy, whereas UQ remains valuable. By anchoring to a point estimator we can recover accuracy, while retaining valuable UQ, ultimately delivering strong performance across metrics for a cost comparable to the SOTA.
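As a rough illustration of the parallel-chain idea described in the abstract (not the paper's actual algorithm), the sketch below runs several independent random-walk Metropolis chains on a toy one-dimensional Bayesian logistic-regression posterior, each initialized at the MAP estimate, and pools the samples to estimate epistemic uncertainty. The model, prior variance, step size, and chain count are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1D logistic regression, y ~ Bernoulli(sigmoid(w * x)).
x = rng.normal(size=50)
y = (rng.random(50) < 1.0 / (1.0 + np.exp(-1.5 * x))).astype(float)

def log_post(w, prior_var=4.0):
    """Log posterior: Bernoulli likelihood plus a Gaussian prior (assumed)."""
    logits = w * x
    ll = np.sum(y * logits - np.log1p(np.exp(logits)))
    return ll - 0.5 * w**2 / prior_var

# MAP estimate by a coarse grid search (illustration only).
grid = np.linspace(-5, 5, 2001)
w_map = grid[np.argmax([log_post(w) for w in grid])]

def rw_chain(w0, n_steps=2000, step=0.3):
    """One random-walk Metropolis chain started at w0."""
    w, lp = w0, log_post(w0)
    out = []
    for _ in range(n_steps):
        w_new = w + step * rng.normal()
        lp_new = log_post(w_new)
        if np.log(rng.random()) < lp_new - lp:
            w, lp = w_new, lp_new
        out.append(w)
    return np.array(out)

# "Parallel" chains (run serially here), all anchored at the MAP.
chains = [rw_chain(w_map) for _ in range(4)]
samples = np.concatenate([c[500:] for c in chains])  # drop burn-in

print(f"MAP: {w_map:.2f}, posterior mean: {samples.mean():.2f}, "
      f"epistemic sd: {samples.std():.2f}")
```

In a real implementation the chains would run on separate workers, which is where the abstract's observation about an irreducible time barrier applies: parallelism shortens wall-clock time, but each chain still needs enough steps to mix.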
Problem

Research questions and friction points this paper is trying to address.

How to make Bayesian uncertainty estimation scale to deep learning workloads
Whether parallel SMC/MCMC can match serial implementations in accuracy and total cost
How to balance predictive accuracy against uncertainty quantification cost-effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel implementation of SMC and MCMC
Interpolates between point estimator and posterior
Anchors to point estimator for accuracy recovery
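The "interpolates between point estimator and posterior" idea can be caricatured in a conjugate Gaussian model, where everything is available in closed form. The sketch below geometrically interpolates between a tight Gaussian at the MAP (a near-point estimate) and the full posterior; the interpolation scheme, the `eps` width, and the model are assumptions made purely to illustrate the accuracy/UQ trade-off, not the paper's actual construction.

```python
import numpy as np

# Conjugate toy model: theta ~ N(0, 1), data y_i ~ N(theta, 1).
rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=1.0, size=20)

n = len(y)
post_prec = 1.0 + n          # prior precision + n * likelihood precision
post_mean = y.sum() / post_prec
theta_map = post_mean        # for a Gaussian posterior, MAP == posterior mean

def anchored(lmbda, eps=1e-2):
    """Geometric interpolation between a tight Gaussian at the MAP
    (lmbda = 0) and the full posterior (lmbda = 1). Illustrative only."""
    prec = (1.0 - lmbda) / eps**2 + lmbda * post_prec
    mean = ((1.0 - lmbda) * theta_map / eps**2
            + lmbda * post_mean * post_prec) / prec
    return mean, 1.0 / np.sqrt(prec)

for lmbda in (0.0, 0.5, 1.0):
    m, s = anchored(lmbda)
    print(f"lambda={lmbda:.1f}: mean={m:.3f}, sd={s:.4f}")
```

At one end the sampler concentrates near the point estimate (fast, accurate, little UQ); at the other it targets the full posterior (expensive, rich UQ), mirroring the trade-off the Innovation bullets describe.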
Xinzhu Liang
Mathematics Department, University of Manchester

J. Lukens
School of Electrical and Computer Engineering, Purdue University; Quantum Information Science Section, Oak Ridge National Laboratory

Sanjaya Lohani
Department of Electrical and Computer Engineering, Southern Methodist University

Brian T. Kirby
US Army Research Laboratory
Quantum Information

T. Searles
Department of Electrical and Computer Engineering, University of Illinois Chicago

Xin Qiu
Cognizant AI Labs
Neural Architecture Search, Uncertainty Quantification, Evolutionary Computation

Kody J. H. Law
Professor at the University of Manchester and AI Research Scientist at Meta
AI, Machine Learning, Computational Statistics, Computational Applied Mathematics