🤖 AI Summary
This work addresses posterior inference in complex systems with intractable likelihoods. We propose ConDiSim, a conditional diffusion model designed for simulation-based inference: it pairs a forward process that progressively adds Gaussian noise to the parameters with an observation-conditioned reverse denoising process, enabling efficient approximation of high-dimensional, multimodal posterior distributions. Built on the Denoising Diffusion Probabilistic Model (DDPM) framework, ConDiSim parameterizes the reverse process with a conditional U-Net and trains it via the reparameterized noise-prediction objective, which yields stable optimization. Evaluated on ten benchmark tasks and two real-world problems, ConDiSim produces accurate posterior approximations, exhibits robust training dynamics, and enables fast inference, comparing favorably with both Approximate Bayesian Computation (ABC) and normalizing-flow-based approaches.
📝 Abstract
We present a conditional diffusion model, ConDiSim, for simulation-based inference in complex systems with intractable likelihoods. ConDiSim leverages denoising diffusion probabilistic models to approximate posterior distributions: a forward process adds Gaussian noise to the parameters, and a reverse process learns to denoise them, conditioned on observed data. This approach effectively captures complex dependencies and multi-modality within posteriors. ConDiSim is evaluated on ten benchmark problems and two real-world test problems, where it demonstrates accurate posterior approximation while maintaining computational efficiency and stable model training. ConDiSim offers a robust and extensible framework for simulation-based inference, particularly suited to parameter-inference workflows that require fast inference.
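To make the forward-noising step described above concrete, here is a minimal sketch of the closed-form DDPM corruption applied to simulator parameters. The noise schedule values, array shapes, and function names are illustrative assumptions, not the paper's exact settings; the reverse process (a conditional U-Net taking the observation as input) is only indicated in comments.

```python
import numpy as np

def make_schedule(T=1000, beta_min=1e-4, beta_max=0.02):
    # Linear variance schedule (an assumed, common DDPM default).
    betas = np.linspace(beta_min, beta_max, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)  # \bar{alpha}_t = prod_{s<=t} alpha_s
    return betas, alpha_bars

def forward_noise(theta0, t, alpha_bars, rng):
    """Sample theta_t ~ q(theta_t | theta_0) in closed form.

    theta_t = sqrt(abar_t) * theta_0 + sqrt(1 - abar_t) * eps,  eps ~ N(0, I)
    """
    eps = rng.standard_normal(theta0.shape)
    theta_t = (np.sqrt(alpha_bars[t]) * theta0
               + np.sqrt(1.0 - alpha_bars[t]) * eps)
    return theta_t, eps  # eps is the regression target for the denoiser

# The reverse process would train a conditional network eps_hat(theta_t, t, x_obs)
# with the usual DDPM loss ||eps - eps_hat||^2; only the forward target
# construction is shown here.
rng = np.random.default_rng(0)
betas, alpha_bars = make_schedule()
theta0 = rng.standard_normal((16, 4))  # a batch of parameter vectors
theta_t, eps = forward_noise(theta0, t=500, alpha_bars=alpha_bars, rng=rng)
```

Because the corruption is Gaussian with known coefficients, `theta0` can be recovered exactly from `theta_t` and `eps`, which is what makes the noise-prediction parameterization of the reverse process well posed.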