🤖 AI Summary
Amortized Bayesian inference suffers significant performance degradation under model misspecification or distributional shift, and existing self-consistency training approaches struggle to adapt to continuously arriving data. This work introduces unsupervised continual learning into this framework for the first time, proposing a decoupled strategy that pairs simulation-based pretraining with sequential self-consistent fine-tuning, using experience replay and Elastic Weight Consolidation (EWC) to mitigate catastrophic forgetting. Evaluated on three benchmark cases, the proposed method effectively alleviates forgetting and yields posterior estimates that substantially outperform those from standard simulation-based training, achieving closer alignment with reference Markov chain Monte Carlo (MCMC) results.
📝 Abstract
Amortized Bayesian Inference (ABI) enables efficient posterior estimation using generative neural networks trained on simulated data, but often suffers from performance degradation under model misspecification. While self-consistency (SC) training on unlabeled empirical data can enhance network robustness, current approaches are limited to static, single-task settings and fail to handle sequentially arriving data or distribution shifts. We propose a continual learning framework for ABI that decouples simulation-based pre-training from unsupervised sequential SC fine-tuning on real-world data. To address the challenge of catastrophic forgetting, we introduce two adaptation strategies: (1) SC with episodic replay, utilizing a memory buffer of past observations, and (2) SC with Elastic Weight Consolidation (EWC), which regularizes updates to preserve task-critical parameters. Across three diverse case studies, our methods significantly mitigate forgetting and yield posterior estimates that outperform standard simulation-based training, aligning more closely with MCMC references and providing a viable path toward trustworthy ABI across a range of tasks.
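The two forgetting-mitigation strategies named in the abstract can be sketched in a few lines. This is a minimal, illustrative version only: the quadratic EWC penalty follows the standard Fisher-weighted form, and the episodic memory uses reservoir sampling as one plausible buffer policy; none of the names or choices here are taken from the paper's actual implementation.

```python
import random
import numpy as np

def ewc_penalty(params, anchor_params, fisher, lam=1.0):
    """Standard EWC regularizer: (lam/2) * sum_i F_i * (theta_i - theta*_i)^2.

    `fisher` approximates each parameter's importance on earlier tasks, so
    large F_i discourages moving theta_i away from its anchor value theta*_i.
    """
    return 0.5 * lam * float(np.sum(fisher * (params - anchor_params) ** 2))

class ReplayBuffer:
    """Fixed-size episodic memory filled by reservoir sampling, so every
    observation seen so far has an equal chance of being retained."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, obs):
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(obs)
        else:
            # Replace a random slot with probability capacity / n_seen.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.buffer[j] = obs

    def sample(self, k):
        """Draw up to k stored observations to mix into the current SC batch."""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))
```

In a sequential fine-tuning loop, the per-task SC loss would be augmented with `ewc_penalty(...)` and/or computed on batches that interleave fresh observations with `buffer.sample(k)`.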