🤖 AI Summary
Bayesian inverse problems governed by physics-based models—such as partial differential equations solved via finite element methods—are computationally intractable for conventional MCMC due to the prohibitive cost of repeated high-fidelity solver evaluations.
Method: We propose a multi-fidelity delayed-acceptance MCMC framework that combines heterogeneous low-fidelity solvers with an offline-trained multi-fidelity deep neural network surrogate, eliminating high-fidelity solver calls during online sampling. A hierarchical delayed-acceptance mechanism improves both the accuracy of the low-fidelity approximation and the mixing of the Markov chain.
Results: Evaluated on two benchmark problems—steady-state groundwater flow and transient reaction–diffusion—the method substantially accelerates posterior inference: orders-of-magnitude reductions in computational cost, longer sub-chain lengths, and improved effective sample size per unit time, enabling efficient uncertainty quantification in high-dimensional physical inverse problems.
📝 Abstract
Inverse uncertainty quantification (UQ) tasks such as parameter estimation are computationally demanding for physics-based models, typically requiring repeated evaluations of complex numerical solvers. When partial differential equations are involved, full-order models such as those based on the Finite Element Method can make traditional sampling approaches like Markov Chain Monte Carlo (MCMC) computationally infeasible. Although data-driven surrogate models may help reduce evaluation costs, their utility is often limited by the expense of generating high-fidelity data. In contrast, low-fidelity data can be produced more efficiently, although relying on them alone may degrade the accuracy of the inverse UQ solution.
To address these challenges, we propose a Multi-Fidelity Delayed Acceptance scheme for Bayesian inverse problems. Extending the Multi-Level Delayed Acceptance framework, the method introduces multi-fidelity neural networks that combine the predictions of solvers of varying fidelity, with high-fidelity evaluations restricted to an offline training stage. During the online phase, likelihood evaluations are obtained by evaluating the coarse solvers and passing their outputs to the trained neural networks, thereby avoiding additional high-fidelity simulations.
This construction allows heterogeneous coarse solvers to be incorporated consistently within the hierarchy, providing greater flexibility than standard Multi-Level Delayed Acceptance. The proposed approach improves the approximation accuracy of the low-fidelity solvers, leading to longer sub-chain lengths, better mixing, and accelerated posterior inference. The effectiveness of the strategy is demonstrated on two benchmark inverse problems involving (i) steady isotropic groundwater flow and (ii) an unsteady reaction–diffusion system, for which substantial computational savings are obtained.
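The two-stage acceptance underlying delayed-acceptance MCMC can be illustrated with a minimal sketch. This is not the paper's implementation: the `log_like_coarse` and `log_like_fine` functions below are hypothetical analytic stand-ins for, respectively, a cheap screening model and the corrected model (in the proposed scheme, coarse-solver outputs passed through a trained multi-fidelity network). A proposal must first pass a cheap first-stage test before the more expensive model is evaluated; the second-stage ratio corrects for the screen, so the chain targets the fine posterior exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for illustration only: a slightly biased cheap
# screening log-likelihood and the "fine" log-likelihood the chain should
# actually target (flat prior assumed, so the posterior is the likelihood).
def log_like_coarse(theta):
    return -0.5 * np.sum((theta - 0.1) ** 2 / 1.2)

def log_like_fine(theta):
    return -0.5 * np.sum(theta ** 2)

def da_step(theta, lc, lf, step=0.5):
    """One delayed-acceptance Metropolis step with a symmetric proposal."""
    prop = theta + step * rng.standard_normal(theta.shape)
    lc_prop = log_like_coarse(prop)
    # Stage 1: cheap screen; most poor proposals are rejected here,
    # so the expensive model is evaluated only for promising points.
    if np.log(rng.uniform()) >= lc_prop - lc:
        return theta, lc, lf
    # Stage 2: evaluate the fine model and correct for the screen's bias.
    lf_prop = log_like_fine(prop)
    if np.log(rng.uniform()) < (lf_prop - lf) - (lc_prop - lc):
        return prop, lc_prop, lf_prop
    return theta, lc, lf

theta = np.zeros(2)
lc, lf = log_like_coarse(theta), log_like_fine(theta)
samples = []
for _ in range(5000):
    theta, lc, lf = da_step(theta, lc, lf)
    samples.append(theta.copy())
samples = np.asarray(samples)
```

In the multi-fidelity setting described above, the stage-1 screen would itself be a sub-chain over coarse levels, and `log_like_fine` would never touch the high-fidelity solver online, only the coarse solvers plus the offline-trained network.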