AI Summary
VBMC suffers from approximation bias under computationally expensive likelihoods, as its conservative exploration often fails to capture multimodal posterior structure. To address this, we propose the first posterior stacking framework that requires no additional likelihood evaluations. Our method constructs a globally consistent posterior estimate by ensembling mixture variational approximations from multiple independent VBMC runs and aggregating component-wise model evidence estimates. Leveraging VBMC's intrinsic mixture representation, the approach natively supports parallel initialization and distributed fusion, substantially improving exploration robustness and scalability. Evaluated on two synthetic benchmarks and two real-world computational neuroscience tasks, our method reduces KL divergence by 32% on average and increases effective sample size by a factor of 2.1, without any extra likelihood calls.
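To make the ensembling step concrete, the following is a rough sketch in our own notation (not taken verbatim from the paper): each run $r$ returns a Gaussian-mixture variational approximation, and the stacked posterior is a single mixture over the pooled components,

$$
q_{\text{stack}}(\theta) \;=\; \sum_{r=1}^{R} \sum_{k=1}^{K_r} \tilde{w}_{r,k}\, \mathcal{N}\!\big(\theta;\, \mu_{r,k}, \Sigma_{r,k}\big),
\qquad \tilde{w}_{r,k} \ge 0, \quad \sum_{r,k} \tilde{w}_{r,k} = 1,
$$

where run $r$ contributes its own components $(\mu_{r,k}, \Sigma_{r,k})$ and the stacked weights $\tilde{w}_{r,k}$ are chosen purely in post-processing from the per-component evidence estimates already stored by each run, which is why no further likelihood evaluations are needed.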
Abstract
Variational Bayesian Monte Carlo (VBMC) is a sample-efficient method for approximate Bayesian inference with computationally expensive likelihoods. While VBMC's local surrogate approach provides stable approximations, its conservative exploration strategy and limited evaluation budget can cause it to miss important regions of complex posteriors, such as distinct modes. In this work, we introduce Stacking Variational Bayesian Monte Carlo (S-VBMC), a method that constructs global posterior approximations by merging independent VBMC runs through a principled and inexpensive post-processing step. Our approach leverages VBMC's mixture posterior representation and per-component evidence estimates, requiring no additional likelihood evaluations while being naturally parallelizable. We demonstrate S-VBMC's effectiveness on two synthetic problems designed to challenge VBMC's exploration capabilities and two real-world applications from computational neuroscience, showing substantial improvements in posterior approximation quality across all cases.
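The sketch below illustrates what such a post-processing step could look like in NumPy/SciPy. It is an illustration of the general idea, not the exact S-VBMC procedure: the inputs `means`, `covs`, and `expected_log_joint` are hypothetical placeholders for quantities that each finished VBMC run already stores (its mixture components and per-component expected log-joint estimates from the surrogate), so no new likelihood evaluations occur here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp, softmax
from scipy.stats import multivariate_normal


def stack_vbmc_runs(means, covs, expected_log_joint, n_mc=2000, seed=0):
    """Re-optimize mixture weights over components pooled from several runs
    by maximizing a stacked ELBO estimate,
        ELBO(w) = sum_k w_k * I_k + H[q_w],
    where I_k is the stored expected log-joint of component k and the mixture
    entropy H[q_w] is estimated by Monte Carlo (it has no closed form)."""
    rng = np.random.default_rng(seed)
    K = len(means)
    # Draw fixed samples from each pooled component (reused at every step).
    samples = [rng.multivariate_normal(m, C, size=n_mc)
               for m, C in zip(means, covs)]
    # log N(x_j^(k); mu_l, Sigma_l): samples from component k scored under
    # component l, shape (K, K, n_mc).
    logpdf = np.array([[multivariate_normal.logpdf(samples[k], means[l], covs[l])
                        for l in range(K)] for k in range(K)])

    def neg_elbo(eta):
        w = softmax(eta)  # simplex parameterization of the stacked weights
        log_w = np.log(np.clip(w, 1e-300, None))
        # Expected log-joint term is linear in the weights.
        elogjoint = np.dot(w, expected_log_joint)
        # log q_w(x) for every stored sample, then a stratified MC estimate of
        # the entropy: H[q_w] = -sum_k w_k * E_{x ~ comp k}[log q_w(x)].
        log_qw = logsumexp(logpdf + log_w[None, :, None], axis=1)  # (K, n_mc)
        entropy = -np.dot(w, log_qw.mean(axis=1))
        return -(elogjoint + entropy)

    res = minimize(neg_elbo, x0=np.zeros(K), method="L-BFGS-B")
    return softmax(res.x)  # stacked weights over all pooled components
```

In practice the inputs would be read from the stored state of each completed run (e.g., the mixture parameters of its variational posterior and the associated per-component estimates), and because every run can be produced independently beforehand, the stacking itself is trivially parallelizable and adds only a cheap optimization over the weights.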