Linearization Turns Neural Operators into Function-Valued Gaussian Processes

📅 2024-06-07
🏛️ arXiv.org
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Neural operators lack reliable uncertainty quantification (UQ), which high-stakes scientific simulation requires. Method: LUNO linearizes a trained neural operator to push Gaussian weight-space uncertainty forward into function space, yielding a Gaussian process belief over the operator's predictions; this construction can be read as a probabilistic version of currying from functional programming. Built on the linearized Laplace approximation, LUNO requires no architectural modification, applies post hoc without retraining, and is resolution-agnostic by design. Results: On Fourier neural operators (FNOs), LUNO achieves well-calibrated predictive uncertainty with minimal prediction overhead and scales to large models and datasets.
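In symbols, linearizing the trained operator f(·; w) around the MAP weights w* gives f(u; w) ≈ f(u; w*) + J_u (w − w*), so a Gaussian weight belief N(w*, Σ) is pushed forward to a Gaussian N(f(u; w*), J_u Σ J_uᵀ) over the output function's values at any discretization. The JAX sketch below illustrates this pushforward; `net` is a toy stand-in for a trained neural operator, the names are illustrative rather than the paper's code, and a practical implementation would use Jacobian-vector products and structured covariances instead of dense matrices.

```python
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def net(params, u):
    # Toy stand-in for a trained neural operator: maps a function u,
    # discretized as an (n_points, d_in) array, to (n_points, d_out).
    return jnp.tanh(u @ params["W"]) @ params["V"]

def linearized_predictive(params_map, Sigma_w, u):
    # First-order Taylor pushforward: a Gaussian N(w*, Sigma_w) over
    # weights induces N(f(u; w*), J Sigma_w J^T) over output values.
    mean = net(params_map, u)
    w_flat, unravel = ravel_pytree(params_map)
    J = jax.jacobian(lambda w: net(unravel(w), u).ravel())(w_flat)
    return mean, J @ Sigma_w @ J.T
```

Because `u` can be any discretization of the input function, the same weight-space belief yields means and covariances at arbitrary query resolutions, which is the resolution-agnostic property claimed above.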

📝 Abstract
Neural operators generalize neural networks to learn mappings between function spaces from data. They are commonly used to learn solution operators of parametric partial differential equations (PDEs) or propagators of time-dependent PDEs. However, to make them useful in high-stakes simulation scenarios, their inherent predictive error must be quantified reliably. We introduce LUNO, a novel framework for approximate Bayesian uncertainty quantification in trained neural operators. Our approach leverages model linearization to push (Gaussian) weight-space uncertainty forward to the neural operator's predictions. We show that this can be interpreted as a probabilistic version of the concept of currying from functional programming, yielding a function-valued (Gaussian) random process belief. Our framework provides a practical yet theoretically sound way to apply existing Bayesian deep learning methods such as the linearized Laplace approximation to neural operators. Like the underlying neural operator, our approach is resolution-agnostic by design. The method adds minimal prediction overhead, can be applied post-hoc without retraining the network, and scales to large models and datasets. We evaluate these aspects in a case study on Fourier neural operators.
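The "existing Bayesian deep learning methods" the abstract mentions include the linearized Laplace approximation, which supplies the weight-space covariance post hoc from the trained weights. Below is a hedged sketch under a Gaussian likelihood, reusing `net` and `linearized_predictive` from the sketch above; the dense generalized Gauss-Newton matrix is for illustration only, and scalable implementations use structured curvature approximations (e.g. diagonal, Kronecker-factored, or last-layer variants).

```python
def ggn_laplace_covariance(params_map, train_inputs,
                           noise_var=1e-2, prior_prec=1.0):
    # Post-hoc Laplace approximation with a generalized Gauss-Newton
    # Hessian: H = prior_prec * I + sum_n J_n^T J_n / noise_var.
    w_flat, unravel = ravel_pytree(params_map)
    H = prior_prec * jnp.eye(w_flat.size)
    for u in train_inputs:
        J = jax.jacobian(lambda w: net(unravel(w), u).ravel())(w_flat)
        H = H + J.T @ J / noise_var
    return jnp.linalg.inv(H)

# Fit once after training, then query at any discretization:
# Sigma_w = ggn_laplace_covariance(params_map, train_inputs)
# mean, cov = linearized_predictive(params_map, Sigma_w, u_query)
```

No retraining is needed: the curvature pass touches the training data once, and each subsequent prediction adds only the cost of Jacobian products on top of a forward pass, consistent with the minimal prediction overhead reported in the abstract.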
Problem

Research questions and friction points this paper is trying to address.

Neural Operators
Gaussian Processes
Uncertainty Quantification
Innovation

Methods, ideas, or system contributions that make the work stand out.

LUNO
Bayesian Deep Learning
Prediction Uncertainty