🤖 AI Summary
This work addresses the reliability of simulation-based Bayesian inference (SBI) under model misspecification, i.e., when the simulator does not match the true data-generating process. It consolidates recent research into three classes of misspecification-robust strategies: robust summary statistics, generalised Bayesian inference, and error modelling with adjustment parameters. For each class, it explains the source of posterior bias being targeted and how the strategy mitigates it. Empirical results on an illustrative example show that standard SBI methods can produce posteriors that deviate substantially from the ground truth under misspecification, whereas the reviewed robust alternatives markedly reduce this bias, improving posterior accuracy and calibration. The work thereby offers practical guidance for deploying SBI in real-world complex systems where model fidelity cannot be guaranteed.
📝 Abstract
Simulation-based Bayesian inference (SBI) methods are widely used for parameter estimation in complex models where evaluating the likelihood is challenging but generating simulations is relatively straightforward. However, these methods commonly assume that the simulation model accurately reflects the true data-generating process, an assumption that is frequently violated in realistic scenarios. In this paper, we focus on the challenges faced by SBI methods under model misspecification. We consolidate recent research aimed at mitigating the effects of misspecification, highlighting three key strategies: i) robust summary statistics, ii) generalised Bayesian inference, and iii) error modelling and adjustment parameters. To illustrate both the vulnerabilities of popular SBI methods and the effectiveness of misspecification-robust alternatives, we present empirical results on an illustrative example.
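The first strategy above, robust summary statistics, can be illustrated with a toy sketch. The example below is not from the paper; it is a minimal rejection-ABC setup under assumed conditions: the observed data come from a contaminated Gaussian (10% gross outliers), while the assumed simulator is a clean N(θ, 1). Conditioning on the sample mean (outlier-sensitive) pulls the posterior away from the true location θ = 2, whereas conditioning on the sample median (a robust summary) largely resists the contamination.

```python
import numpy as np

rng = np.random.default_rng(0)

# Misspecified setting: observed data are N(2, 1) with 10% outliers at 20,
# but the simulator assumed below is a clean N(theta, 1).
n = 100
y = rng.normal(2.0, 1.0, n)
y[:10] = 20.0  # contamination the simulator cannot reproduce

def rejection_abc(obs_summary, summary_fn, n_draws=20000, eps=0.1):
    """Basic rejection ABC: keep prior draws whose simulated summary
    lands within eps of the observed summary."""
    thetas = rng.uniform(-5.0, 25.0, n_draws)  # flat prior over theta
    accepted = []
    for th in thetas:
        sim = rng.normal(th, 1.0, n)           # assumed (misspecified) model
        if abs(summary_fn(sim) - obs_summary) < eps:
            accepted.append(th)
    return np.array(accepted)

post_mean = rejection_abc(np.mean(y), np.mean)      # non-robust summary
post_med = rejection_abc(np.median(y), np.median)   # robust summary

print("posterior mean via sample mean:  ", post_mean.mean())
print("posterior mean via sample median:", post_med.mean())
```

Under this setup the mean-based posterior concentrates near the contaminated sample mean (around 3.8), while the median-based posterior stays close to the true θ = 2, which is the qualitative effect the robust-summary-statistics strategy exploits.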