🤖 AI Summary
This work addresses the ill-posedness of inverse problems governed by partial differential equations, which arises from data noise, missing observations, and non-uniqueness; existing Bayesian methods struggle to enforce hard physical constraints in this setting. The authors propose a dual-space sampling framework that integrates an augmented Lagrangian formulation, the alternating direction method of multipliers (ADMM), and Stein variational gradient descent (SVGD), turning hard constraints into differentiable penalty terms that gradient-based samplers can handle. This enables efficient posterior sampling while ensuring exact constraint satisfaction, combining the favorable conditioning of dual-space solvers with the nonparametric flexibility of SVGD. Experiments on a Rosenbrock conditional-inference problem, a Gaussian anomaly model, and Marmousi II full-waveform inversion demonstrate well-calibrated uncertainty estimates, with the posterior contracting as data coverage increases.
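To make the sampling component concrete, here is a minimal sketch of one SVGD update with an RBF kernel and the usual median-heuristic bandwidth. This is not the paper's code; the function names (`rbf_kernel`, `svgd_step`) and the NumPy setting are illustrative assumptions. SVGD moves a set of particles along a kernelized gradient of the log-posterior, and the kernel's repulsive term keeps the particles spread out, which is what produces the uncertainty estimates.

```python
import numpy as np

def rbf_kernel(X, h=None):
    """RBF kernel matrix K and grad_K[i, j] = d k(x_j, x_i) / d x_j.

    X is an (n, d) array of particles; the bandwidth h defaults to the
    median heuristic commonly used with SVGD.
    """
    diffs = X[:, None, :] - X[None, :, :]            # (n, n, d) pairwise differences
    sq_dists = np.sum(diffs ** 2, axis=-1)           # (n, n) squared distances
    if h is None:
        h = np.median(sq_dists) / (2.0 * np.log(X.shape[0] + 1))
        h = max(h, 1e-12)                            # guard against particle collapse
    K = np.exp(-sq_dists / (2.0 * h))
    grad_K = diffs * K[..., None] / h                # kernel gradient (repulsion direction)
    return K, grad_K

def svgd_step(X, grad_log_post, step=1e-2):
    """One SVGD update: kernel-weighted attraction toward high posterior
    density plus kernel repulsion that maintains particle diversity."""
    K, grad_K = rbf_kernel(X)
    phi = (K @ grad_log_post(X) + grad_K.sum(axis=1)) / X.shape[0]
    return X + step * phi
```

As a quick sanity check, sampling a standard Gaussian amounts to iterating `svgd_step(X, lambda X: -X)` from randomly initialized particles.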
📝 Abstract
Inverse problems constrained by partial differential equations are often ill-conditioned due to noisy and incomplete data or inherent non-uniqueness. A prominent example is full waveform inversion, which estimates Earth's subsurface properties by fitting seismic measurements subject to the wave equation, where ill-conditioning arises from noisy, band-limited, finite-aperture data and from shadow zones. Casting the inverse problem into a Bayesian framework allows for a more comprehensive description of its solution: instead of a single estimate, the posterior distribution characterizes non-uniqueness and can be sampled to quantify uncertainty. However, no clear procedure exists for translating hard physical constraints, such as the wave equation, into prior distributions amenable to existing sampling techniques. To address this, we perform posterior sampling in the dual space using an augmented Lagrangian formulation, which translates hard constraints into penalties amenable to sampling algorithms while ensuring their exact satisfaction. We achieve this by integrating the alternating direction method of multipliers (ADMM) with Stein variational gradient descent (SVGD), a particle-based sampler: the constraint is relaxed at each iteration, and multiplier updates progressively enforce its satisfaction. This enables constrained posterior sampling while inheriting the favorable conditioning properties of dual-space solvers, where partial constraint relaxation allows productive updates even when the current model is far from the true solution. We validate the method on a stylized Rosenbrock conditional inference problem and on frequency-domain full waveform inversion for a Gaussian anomaly model and the Marmousi II benchmark, demonstrating well-calibrated uncertainty estimates and posterior contraction with increasing data coverage.
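The coupling described in the abstract can be sketched as an outer augmented-Lagrangian loop wrapped around the SVGD sampler above. The sketch below is a deliberate simplification under assumed interfaces: `constraint` and `constraint_jac` stand in for the PDE residual and its Jacobian, the per-particle multipliers are an illustrative choice, and the paper's actual ADMM splitting over field and model variables is more involved than this plain method-of-multipliers loop. It shows the mechanism the abstract names: each sampling sweep sees a soft quadratic penalty, and dual-ascent multiplier updates between sweeps progressively enforce the constraint.

```python
import numpy as np

def augmented_lagrangian_svgd(X, grad_log_lik, constraint, constraint_jac,
                              rho=1.0, outer_iters=50, inner_iters=100,
                              step=1e-3):
    """Dual-space sampling sketch: SVGD sweeps on the penalized posterior,
    interleaved with per-particle multiplier (dual-ascent) updates.

    constraint(x) returns the residual vector c(x); constraint_jac(x) its
    Jacobian J with J.shape == (len(c), x.size). Hypothetical interface.
    """
    lam = np.zeros((X.shape[0],) + constraint(X[0]).shape)  # one multiplier per particle
    for _ in range(outer_iters):
        def grad_log_post(X):
            G = np.empty_like(X)
            for i, x in enumerate(X):
                c, J = constraint(x), constraint_jac(x)
                # Gradient of log-likelihood minus the augmented-Lagrangian
                # terms lam @ c + (rho / 2) * ||c||^2 with respect to x.
                G[i] = grad_log_lik(x) - J.T @ (lam[i] + rho * c)
            return G
        for _ in range(inner_iters):
            X = svgd_step(X, grad_log_post, step)  # sampler from the sketch above
        # Dual ascent: multipliers absorb the remaining constraint violation,
        # tightening feasibility across outer iterations without sending the
        # penalty weight to infinity.
        lam = lam + rho * np.stack([constraint(x) for x in X])
    return X, lam
```

Because the penalty stays finite at every sweep, the inner sampling problem remains well conditioned even when the particles are far from feasibility, which mirrors the conditioning argument made for dual-space solvers in the abstract.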