🤖 AI Summary
In group inverse problems, the unknown distribution of observational noise poses a fundamental challenge to reliable parameter inference.
Method: We propose a novel framework that jointly estimates both the physical model parameter distribution and the noise distribution. Our approach integrates coupled gradient descent with an adaptive empirical measure–driven active learning strategy, enabling the first differentiable surrogate modeling of black-box, nonsmooth physics solvers. It unifies blind deconvolution and distributional inversion within a single optimization pipeline. Key components include parametric noise modeling, physics-informed loss functions, and structure-aware gradient optimization to ensure physical consistency and convergence.
Results: Experiments on canonical tasks—including porous media flow and damped elastodynamic systems—demonstrate significant improvements in parameter distribution estimation accuracy and computational efficiency. The method operates without prior knowledge of the noise distribution, exhibiting strong robustness and generalization across diverse physical domains.
📝 Abstract
This work is focussed on the inversion task of inferring the distribution over parameters of interest leading to multiple sets of observations. The potential to solve such distributional inversion problems is driven by increasing availability of data, but a major roadblock is blind deconvolution, arising when the observational noise distribution is unknown. However, when data originates from collections of physical systems, a population, it is possible to leverage this information to perform deconvolution. To this end, we propose a methodology leveraging large data sets of observations, collected from different instantiations of the same physical processes, to simultaneously deconvolve the data corrupting noise distribution, and to identify the distribution over model parameters defining the physical processes. A parameter-dependent mathematical model of the physical process is employed. A loss function characterizing the match between the observed data and the output of the mathematical model is defined; it is minimized as a function of the both the parameter inputs to the model of the physics and the parameterized observational noise. This coupled problem is addressed with a modified gradient descent algorithm that leverages specific structure in the noise model. Furthermore, a new active learning scheme is proposed, based on adaptive empirical measures, to train a surrogate model to be accurate in parameter regions of interest; this approach accelerates computation and enables automatic differentiation of black-box, potentially nondifferentiable, code computing parameter-to-solution maps. The proposed methodology is demonstrated on porous medium flow, damped elastodynamics, and simplified models of atmospheric dynamics.