Unsupervised Continual Learning for Amortized Bayesian Inference

📅 2026-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Amortized Bayesian inference suffers significant performance degradation under model misspecification or distributional shift, and existing self-consistency training approaches struggle to adapt to continuously arriving data. This work introduces unsupervised continual learning into this framework for the first time, proposing a decoupled strategy that pairs simulation-based pretraining with sequential self-consistent fine-tuning on real data, mitigating catastrophic forgetting through experience replay or Elastic Weight Consolidation (EWC). Evaluated on three benchmark cases, the proposed methods effectively alleviate forgetting and yield posterior estimates that substantially outperform those from standard simulation-based training, aligning more closely with reference Markov chain Monte Carlo (MCMC) results.
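
To make the decoupled pipeline concrete, here is a minimal sketch (not the authors' code): an amortized posterior network is pretrained on simulated prior-predictive pairs, then fine-tuned sequentially on unlabeled observation streams with a self-consistency loss and an episodic replay buffer. The toy Gaussian mean-estimation model, the network architecture, and the variance-based form of the self-consistency loss are illustrative assumptions, not details taken from the paper.

```python
import math
import random
import torch
import torch.nn as nn

torch.manual_seed(0)
LOG2PI = math.log(2 * math.pi)

class GaussianPosteriorNet(nn.Module):
    """Amortized posterior q(theta | y): maps an observation to (mean, log_std)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, y):
        mu, log_std = self.net(y).chunk(2, dim=-1)
        return mu, log_std

def log_prior(theta):                       # theta ~ N(0, 1)
    return -0.5 * theta ** 2 - 0.5 * LOG2PI

def log_lik(y, theta):                      # y | theta ~ N(theta, 1)
    return -0.5 * (y - theta) ** 2 - 0.5 * LOG2PI

def log_q(theta, mu, log_std):              # Gaussian posterior density of the network
    return -0.5 * ((theta - mu) / log_std.exp()) ** 2 - log_std - 0.5 * LOG2PI

def nll_loss(net, theta, y):
    """Stage 1 objective: negative log q(theta | y) on simulated (theta, y) pairs."""
    mu, log_std = net(y)
    return -log_q(theta, mu, log_std).mean()

def sc_loss(net, y, n_draws=8):
    """Self-consistency penalty: log p(theta) + log p(y | theta) - log q(theta | y)
    should not depend on theta, so penalize its variance over posterior draws."""
    mu, log_std = net(y)
    theta = mu + log_std.exp() * torch.randn(n_draws, *mu.shape)   # reparameterized draws
    log_marginal = log_prior(theta) + log_lik(y, theta) - log_q(theta, mu, log_std)
    return log_marginal.var(dim=0).mean()

net = GaussianPosteriorNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Stage 1: simulation-based pretraining on prior-predictive draws.
for _ in range(500):
    theta = torch.randn(128, 1)
    y = theta + torch.randn(128, 1)
    opt.zero_grad(); nll_loss(net, theta, y).backward(); opt.step()

# Stage 2: sequential, unsupervised SC fine-tuning on shifted "real" data streams,
# mixing each batch with observations replayed from earlier tasks.
replay_buffer, buffer_size = [], 256
tasks = [torch.randn(64, 1) + shift for shift in (1.5, -2.0, 3.0)]
for y_task in tasks:
    for _ in range(200):
        batch = y_task
        if replay_buffer:
            past = torch.stack(random.sample(replay_buffer, min(32, len(replay_buffer))))
            batch = torch.cat([y_task, past])
        opt.zero_grad(); sc_loss(net, batch).backward(); opt.step()
    replay_buffer.extend(list(y_task))       # store this task's observations for replay
    replay_buffer = replay_buffer[-buffer_size:]
```

The replay buffer is what distinguishes this from naive sequential fine-tuning: mixing past observations into each batch keeps the self-consistency objective anchored to earlier tasks instead of letting the network drift toward the most recent data stream.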

📝 Abstract
Amortized Bayesian Inference (ABI) enables efficient posterior estimation using generative neural networks trained on simulated data, but often suffers from performance degradation under model misspecification. While self-consistency (SC) training on unlabeled empirical data can enhance network robustness, current approaches are limited to static, single-task settings and fail to handle sequentially arriving data or distribution shifts. We propose a continual learning framework for ABI that decouples simulation-based pre-training from unsupervised sequential SC fine-tuning on real-world data. To address the challenge of catastrophic forgetting, we introduce two adaptation strategies: (1) SC with episodic replay, utilizing a memory buffer of past observations, and (2) SC with elastic weight consolidation, which regularizes updates to preserve task-critical parameters. Across three diverse case studies, our methods significantly mitigate forgetting and yield posterior estimates that outperform standard simulation-based training, achieving estimates closer to MCMC reference, providing a viable path for trustworthy ABI across a range of different tasks.
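
The abstract's second adaptation strategy, elastic weight consolidation, can be sketched as follows. This is a hedged illustration, not the paper's implementation: a toy regression model and a squared-gradient diagonal Fisher approximation stand in for the amortized posterior network and the SC objective. After each task, parameter importances are estimated and later updates are penalized for moving important parameters away from their consolidated values.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

def make_task(slope):
    x = torch.linspace(-2, 2, 128).unsqueeze(-1)
    return x, slope * x + 0.1 * torch.randn_like(x)

def estimate_fisher(model, x, y):
    """Diagonal Fisher approximation: squared gradients of the loss at the task optimum."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}

def ewc_penalty(model, fisher, anchor, lam=100.0):
    """Quadratic penalty keeping important parameters near the previous task's solution."""
    return lam / 2 * sum(
        (fisher[n] * (p - anchor[n]) ** 2).sum() for n, p in model.named_parameters()
    )

fisher, anchor = None, None
for slope in (1.0, -1.5, 2.0):                 # sequentially arriving tasks
    x, y = make_task(slope)
    for _ in range(300):
        loss = loss_fn(model(x), y)
        if fisher is not None:                 # regularize toward the previous solution
            loss = loss + ewc_penalty(model, fisher, anchor)
        opt.zero_grad(); loss.backward(); opt.step()
    fisher = estimate_fisher(model, x, y)      # consolidate after each task
    anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
```

In the continual ABI setting described above, the same penalty would be applied during SC fine-tuning to protect parameters that were important for the simulation-based pretraining and for earlier observation streams.
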
Problem

Research questions and friction points this paper is trying to address.

Amortized Bayesian Inference
Unsupervised Continual Learning
Model Misspecification
Catastrophic Forgetting
Self-Consistency Training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continual Learning
Amortized Bayesian Inference
Self-Consistency
Catastrophic Forgetting
Elastic Weight Consolidation
Aayush Mishra
Department of Statistics, TU Dortmund University, Germany
Šimon Kucharský
Department of Statistics, TU Dortmund University, Germany
Paul-Christian Bürkner
Full Professor of Computational Statistics, TU Dortmund University
Bayesian Statistics · Uncertainty Quantification · Simulation-Based Inference · Prior Specification