🤖 AI Summary
This work addresses forward (likelihood sampling) and inverse (posterior sampling) problems jointly within a Bayesian framework. Methodologically, it combines two triangular normalizing flows—one upper- and one lower-triangular—into a single invertible mapping between parameter and observation spaces, and proposes a training loss for learning this map directly, so that forward simulation and posterior inference are handled by one bidirectional conditional generative model. The key contribution is this unified treatment: the likelihood and the posterior are modeled jointly by a single invertible map, rather than by separate models or separate sampling machinery (e.g., MCMC or variational approximations) for each direction. On several stylized numerical examples, the model performs forward generation and inverse inference simultaneously, keeping simulation and inference statistically consistent within one Bayesian loop.
📝 Abstract
We formulate the inverse problem in a Bayesian framework and aim to train a generative model that allows us to simulate (i.e., sample from the likelihood) and do inference (i.e., sample from the posterior). We review the use of triangular normalizing flows for conditional sampling in this context and show how to combine two such triangular maps (an upper and a lower one) into one invertible mapping that can be used for both simulation and inference. We work out several useful properties of this invertible generative model and propose a possible loss for training the map directly. We illustrate the workings of this new approach to conditional generative modeling numerically on a few stylized examples.
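The core idea—that a lower-triangular map gives likelihood sampling while an upper-triangular map gives posterior sampling, and the two can be tied together through one joint distribution—can be illustrated on a linear-Gaussian toy. The sketch below is an assumption-laden stand-in, not the paper's model: the flows are replaced by fixed triangular matrices (Cholesky factors of a 2D Gaussian joint over a parameter `x` and an observation `y`), which is the degenerate case where both triangular maps are exact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint over (x, y): a zero-mean Gaussian (an illustrative stand-in
# for the paper's parameter/observation pair; covariance chosen arbitrarily).
Sigma = np.array([[1.0, 0.8],
                  [0.8, 2.0]])

# Lower-triangular map L: z -> (x, y).  The first output depends only on
# z1, so applying L to z ~ N(0, I) first draws x ~ p(x) and then
# y ~ p(y | x): this is the "simulation" (likelihood-sampling) direction.
L = np.linalg.cholesky(Sigma)

# Upper-triangular map U: z -> (x, y).  Now y depends only on z2, so the
# same trick draws y ~ p(y) and then x ~ p(x | y): the "inference"
# (posterior-sampling) direction.  It is built by Cholesky-factorizing
# the joint with the ordering of x and y reversed.
P = np.array([[0.0, 1.0], [1.0, 0.0]])          # permutation swapping x, y
U = P @ np.linalg.cholesky(P @ Sigma @ P) @ P   # upper-triangular factor

z = rng.standard_normal((2, 100_000))
xy_fwd = L @ z   # forward / likelihood sampling
xy_inv = U @ z   # inverse / posterior sampling

# Both triangular maps push N(0, I) to the *same* joint, so the empirical
# covariances of both sample sets should match Sigma.
print(np.cov(xy_fwd))
print(np.cov(xy_inv))
```

The composition `inv(U) @ L` is then itself an invertible map relating the two latent parameterizations—loosely the role played by stacking the upper and lower flows into one invertible generative model in the paper, except that there the triangular maps are learned nonlinear flows rather than fixed matrices.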