🤖 AI Summary
This book chapter addresses accuracy and generalization bottlenecks in solving inverse problems for scientific computing. We propose the Paired Autoencoder framework, which jointly models latent representations of both the observed data and the unknown quantities, and co-learns surrogate forward and inverse mappings. Our key contributions are: (1) a bidirectional latent-space coupling mechanism that integrates data-driven flexibility with model-driven physical consistency; (2) support for likelihood-free inference, out-of-distribution detection, and latent-space fine-tuning; and (3) a variational, sampling-enabled uncertainty-quantification approach. Evaluated on linear and nonlinear inverse problems, including image inpainting and seismic imaging, the method achieves high-fidelity reconstructions under severe noise, outperforming purely data-driven and purely model-driven baselines across multiple metrics and thereby demonstrating superior robustness and generalization.
📝 Abstract
In this book chapter, we discuss recent advances in data-driven approaches for inverse problems. In particular, we focus on the *paired autoencoder* framework, which has proven to be a powerful tool for solving inverse problems in scientific computing. The paired autoencoder framework leverages the strengths of both data-driven and model-based methods by projecting both the data and the quantity of interest into latent spaces and by mapping between these latent spaces to provide surrogate forward and inverse mappings. We illustrate the advantages of this approach through numerical experiments, including seismic imaging and classical inpainting: a nonlinear and a linear inverse problem, respectively. Although the paired autoencoder framework is likelihood-free, it generates multiple data- and model-based reconstruction metrics that help assess whether examples are in or out of distribution. In addition to direct model estimates from data, the paired autoencoder enables latent-space refinement to fit the observed data accurately. Numerical experiments show that this procedure, combined with the latent-space initial guess, is essential for high-quality estimates, even when the data noise exceeds the training regime. We also introduce two novel variants that combine variational and paired autoencoder ideas, maintaining the original benefits while enabling sampling for uncertainty analysis.
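To make the architecture described above concrete, the following is a minimal sketch of the wiring of a paired autoencoder: one autoencoder for the model (quantity of interest) `x`, one for the data `d`, and two latent-space maps that connect the codes to give surrogate forward and inverse mappings. All dimensions, variable names, and the use of random linear maps in place of trained networks are illustrative assumptions, not the chapter's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: model x in R^8, data d in R^6, latents in R^4.
nx, nd, nz = 8, 6, 4

# Linear "autoencoders": E_* encodes into the latent space, D_* decodes back.
# A real paired autoencoder uses trained deep networks; random full-rank
# matrices (with pseudoinverse decoders) merely illustrate the structure.
E_x = rng.standard_normal((nz, nx)); D_x = np.linalg.pinv(E_x)
E_d = rng.standard_normal((nz, nd)); D_d = np.linalg.pinv(E_d)

# Latent-space maps coupling the two codes: M_f (forward), M_i (inverse).
M_f = rng.standard_normal((nz, nz))
M_i = np.linalg.pinv(M_f)

def surrogate_forward(x):
    """x -> z_x -> z_d -> d: surrogate for the physical forward operator."""
    return D_d @ (M_f @ (E_x @ x))

def surrogate_inverse(d):
    """d -> z_d -> z_x -> x: direct (likelihood-free) model estimate."""
    return D_x @ (M_i @ (E_d @ d))

# A round trip: the direct estimate x_hat reproduces the observed data
# under the surrogate forward map, even though x_hat need not equal x.
x = rng.standard_normal(nx)
d = surrogate_forward(x)
x_hat = surrogate_inverse(d)
print(np.allclose(surrogate_forward(x_hat), d, atol=1e-8))
```

In this sketch the direct estimate could then serve as the latent-space initial guess mentioned above, with refinement performed by optimizing the latent code `z_x` until the decoded data fits the observations.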