Probabilistic neural operators for functional uncertainty quantification

📅 2025-02-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Neural operators for complex dynamical systems (e.g., weather forecasting) typically neglect model and data uncertainty. To address this limitation, we introduce Probabilistic Neural Operators (PNOs), a framework that incorporates uncertainty quantification directly into neural operator learning. PNOs learn probability distributions over output function spaces, unifying probabilistic modeling and operator learning via strictly proper scoring rules, such as the Continuous Ranked Probability Score, ensuring theoretical soundness and seamless integration with existing neural operator architectures. Evaluated across diverse benchmarks, PNOs consistently outperform deterministic neural operators and prior probabilistic variants: they yield well-calibrated predictive distributions, robustly characterize long-term trajectory uncertainty, improve extreme-event detection, and scale effectively to large-scale physical models.

📝 Abstract
Neural operators aim to approximate the solution operator of a system of differential equations purely from data. They have shown immense success in modeling complex dynamical systems across various domains. However, uncertainties inherent in both model and data have so far rarely been taken into account, a critical limitation in complex, chaotic systems such as weather forecasting. In this paper, we introduce the probabilistic neural operator (PNO), a framework for learning probability distributions over the output function space of neural operators. PNO extends neural operators with generative modeling based on strictly proper scoring rules, integrating uncertainty information directly into the training process. We provide a theoretical justification for the approach and demonstrate improved performance in quantifying uncertainty across different domains and with respect to different baselines. Furthermore, PNO requires minimal adjustment to existing architectures, shows improved performance for most probabilistic prediction tasks, and leads to well-calibrated predictive distributions and adequate uncertainty representations even for long dynamical trajectories. Integrating our approach into large-scale models for physical applications can improve the corresponding uncertainty quantification and extreme-event identification, ultimately leading to a deeper understanding of the predictions of such surrogate models.
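The abstract's key training idea, scoring an ensemble of predicted output functions with a strictly proper scoring rule such as the Continuous Ranked Probability Score (CRPS), can be sketched with the standard empirical ensemble estimator, CRPS ≈ E|X − y| − ½·E|X − X′|, averaged over the output grid. This is a minimal illustrative sketch of that loss, not the paper's implementation; the function name and shapes are assumptions for the example.

```python
import numpy as np

def crps_ensemble(samples: np.ndarray, obs: np.ndarray) -> float:
    """Empirical CRPS of an ensemble, averaged over the output grid.

    samples: (m, n) array - m ensemble members of a predicted function
             evaluated at n grid points (illustrative shapes).
    obs:     (n,) array - observed function values at the same points.
    """
    # E|X - y|: mean absolute deviation of members from the observation
    term1 = np.mean(np.abs(samples - obs[None, :]), axis=0)
    # E|X - X'|: mean pairwise absolute spread within the ensemble
    term2 = np.mean(np.abs(samples[:, None, :] - samples[None, :, :]), axis=(0, 1))
    # CRPS per grid point, then averaged over the grid
    return float(np.mean(term1 - 0.5 * term2))

# Toy usage: a 1-D "function" on 64 grid points with a noisy 32-member ensemble
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0.0, np.pi, 64))
ens = y[None, :] + 0.1 * rng.standard_normal((32, 64))
print(crps_ensemble(ens, y))
```

Because CRPS is strictly proper, the expected loss is minimized only by the true predictive distribution, which is what lets the training objective drive the ensemble toward calibrated uncertainty rather than collapsing to a point estimate.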
Problem

Research questions and friction points this paper is trying to address.

Quantify uncertainty in neural operator outputs
Integrate uncertainty into neural operator training
Improve uncertainty representation in dynamical systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probabilistic neural operators learning distributions over output function spaces
Generative modeling via strictly proper scoring rules (CRPS)
Calibrated uncertainty with minimal changes to existing architectures