🤖 AI Summary
Conventional autoencoders and variational autoencoders (VAEs) are formulated on discrete grids, tying the learned representation to a fixed resolution and limiting generalization across meshes. This paper introduces function-space versions of the autoencoder (FAE) and the VAE (FVAE): objectives defined directly on infinite-dimensional function spaces, so that discretisation is a final implementation step rather than a modeling assumption. Methodologically, the objectives are paired with neural operator architectures (e.g., DeepONet, TFNO) that can be evaluated on any mesh, together with function-space variational inference and continuous-domain reconstruction losses, enabling arbitrary-resolution evaluation and seamless cross-grid generalization. Key findings: (1) the FVAE objective is well defined only when the data distribution is compatible with the chosen generative model, which holds, for example, when the data arise from a stochastic differential equation, but is generally restrictive; (2) the FAE objective is well defined in many situations where FVAE fails to be, making it the more broadly applicable tool. Experiments demonstrate strong performance on inpainting, superresolution, and generative modeling of scientific data across diverse resolution regimes.
📝 Abstract
Autoencoders have found widespread application in both their original deterministic form and in their variational formulation (VAEs). In scientific applications and in image processing it is often of interest to consider data that are viewed as functions; while discretisation (of differential equations arising in the sciences) or pixellation (of images) renders problems finite dimensional in practice, conceiving first of algorithms that operate on functions, and only then discretising or pixellating, leads to better algorithms that smoothly operate between resolutions. In this paper function-space versions of the autoencoder (FAE) and variational autoencoder (FVAE) are introduced, analysed, and deployed. Well-definedness of the objective governing VAEs is a subtle issue, particularly in function space, limiting applicability. For the FVAE objective to be well defined requires compatibility of the data distribution with the chosen generative model; this can be achieved, for example, when the data arise from a stochastic differential equation, but is generally restrictive. The FAE objective, on the other hand, is well defined in many situations where FVAE fails to be. Pairing the FVAE and FAE objectives with neural operator architectures that can be evaluated on any mesh enables new applications of autoencoders to inpainting, superresolution, and generative modelling of scientific data.
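As a concrete illustration of the "algorithms on functions first, discretisation second" idea, the following minimal NumPy sketch shows a DeepONet-style function-space autoencoder: the encoder reads the input function at a fixed set of sensor points and maps it to a finite-dimensional latent code, while the decoder combines the latent code (branch) with arbitrary query coordinates (trunk) via an inner product, so the same code can be decoded on any mesh. This is an illustrative assumption-laden sketch with untrained random weights, not the paper's implementation; all names and dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper).
N_SENSORS, LATENT, WIDTH = 32, 8, 64

# Random (untrained) weights standing in for learned parameters.
W_enc = rng.normal(size=(LATENT, N_SENSORS)) / np.sqrt(N_SENSORS)
W_branch = rng.normal(size=(WIDTH, LATENT)) / np.sqrt(LATENT)
W_trunk = rng.normal(size=(WIDTH, 1))
b_trunk = rng.normal(size=(WIDTH,))

sensors = np.linspace(0.0, 1.0, N_SENSORS)  # fixed encoder sensor grid

def encode(u):
    """Map a function u: [0,1] -> R to a finite-dimensional latent code
    by reading it at the sensor points and applying a linear map."""
    return W_enc @ u(sensors)

def decode(z, x):
    """DeepONet-style decoder: a branch net acts on the latent code, a
    trunk net on the query coordinates x; their inner product gives the
    function value. Since x is arbitrary, the output is mesh-independent."""
    branch = np.tanh(W_branch @ z)                            # (WIDTH,)
    trunk = np.tanh(W_trunk @ x[None, :] + b_trunk[:, None])  # (WIDTH, len(x))
    return branch @ trunk                                     # (len(x),)

u = lambda x: np.sin(2 * np.pi * x)  # example input function
z = encode(u)

# The same latent code can be decoded on grids of any resolution.
coarse = decode(z, np.linspace(0, 1, 10))
fine = decode(z, np.linspace(0, 1, 1000))
print(coarse.shape, fine.shape)  # (10,) (1000,)
```

The point of the sketch is the signature of `decode`: it takes query locations as an argument, which is what lets a trained model be evaluated between resolutions without retraining.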