🤖 AI Summary
Bayesian inference often faces a trade-off between computational efficiency and posterior accuracy, especially across multiple datasets. This paper proposes an adaptive hybrid inference workflow that—uniquely—integrates amortized variational inference (AVI) with Markov chain Monte Carlo (MCMC) in a dynamically coordinated manner. Leveraging principled posterior diagnostics, it constructs a Pareto frontier to enable automatic, optimal switching between AVI and MCMC. Computational reuse and scheduling optimization further boost inference throughput. The method unifies generative neural network modeling, MCMC refinement, and verifiable diagnostic mechanisms. Evaluated on tens of thousands of real and synthetic datasets, it achieves a 3.2× average speedup over standalone AVI or MCMC baselines, while preserving posterior fidelity—reducing KL divergence by 47% and increasing effective sample size (ESS) by 2.8×. This work delivers a scalable, efficient, and trustworthy solution for large-scale Bayesian inference.
📝 Abstract
Bayesian inference often faces a trade-off between computational speed and sampling accuracy. We propose an adaptive workflow that integrates rapid amortized inference with gold-standard MCMC techniques to achieve a favorable combination of both speed and accuracy when performing inference on many observed datasets. Our approach uses principled diagnostics to guide the choice of inference method for each dataset, moving along the Pareto front from fast amortized sampling via generative neural networks to slower but guaranteed-accurate MCMC when needed. By reusing computations across steps, our workflow synergizes amortized and MCMC-based inference. We demonstrate the effectiveness of this integrated approach on several synthetic and real-world problems with tens of thousands of datasets, showing efficiency gains while maintaining high posterior quality.