🤖 AI Summary
Existing neural operators struggle to perform reliable uncertainty quantification for stochastic partial differential equations (SPDEs). To address this limitation, this work proposes the Diffusion Last Layer (DLL), a lightweight, plug-and-play probabilistic head that can be attached to any neural operator backbone. DLL introduces, for the first time in this setting, a low-rank Karhunen–Loève expansion in function space to model the output distribution efficiently. This approach significantly improves generalization and uncertainty-aware performance in SPDE operator learning, while also stabilizing long-time rollouts and improving the calibration of epistemic uncertainty estimates.
📝 Abstract
Neural operators have emerged as a powerful paradigm for learning discretization-invariant function-to-function mappings in scientific computing. However, many practical systems are inherently stochastic, making principled uncertainty quantification essential for reliable deployment. To address this, we introduce the diffusion last layer (DLL), a simple, lightweight probabilistic head that can be attached to arbitrary neural operator backbones to model predictive uncertainty. Motivated by the relative smoothness and low-dimensional structure often exhibited by PDE solution distributions, DLL parameterizes the conditional output distribution directly in function space through a low-rank Karhunen–Loève expansion, enabling efficient and expressive uncertainty modeling. Across stochastic PDE operator learning benchmarks, DLL improves generalization and uncertainty-aware prediction. Moreover, even in deterministic long-horizon rollout settings, DLL enhances rollout stability and provides meaningful estimates of epistemic uncertainty for backbone neural operators.
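To make the core idea concrete, the sketch below illustrates what a low-rank Karhunen–Loève (KL) expansion of a random function looks like numerically. This is only an illustrative toy, not the paper's DLL implementation: the mean function, the rank `r`, the sine eigenfunctions, and the Gaussian coefficient distribution are all assumptions chosen for clarity (in DLL these quantities would be produced or sampled by a learned head conditioned on the input).

```python
import numpy as np

# Hypothetical sketch of a low-rank KL expansion on a 1D grid:
#   u(x) = mean(x) + sum_{k=1}^{r} sqrt(lambda_k) * xi_k * phi_k(x),
# with xi_k ~ N(0, 1). All choices below are illustrative stand-ins,
# not the paper's learned quantities.

rng = np.random.default_rng(0)
n, r = 128, 4                      # grid resolution, expansion rank
x = np.linspace(0.0, 1.0, n)

mean = np.sin(np.pi * x)           # stand-in conditional mean function
# Orthonormal sine modes as stand-in eigenfunctions phi_k.
phi = np.stack([np.sqrt(2.0) * np.sin((k + 1) * np.pi * x) for k in range(r)])
lam = 0.5 ** np.arange(1, r + 1)   # decaying eigenvalues lambda_k

# Draw 10 function samples from the implied Gaussian process.
xi = rng.standard_normal((10, r))            # KL coefficients xi_k
samples = mean + (xi * np.sqrt(lam)) @ phi   # shape (10, n)

# Pointwise predictive variance implied by the expansion:
# var(x) = sum_k lambda_k * phi_k(x)^2.
var = (lam[:, None] * phi**2).sum(axis=0)
print(samples.shape, float(var.max()))
```

Because only `r` coefficients vary per sample, both sampling and variance evaluation cost O(r·n), which is what makes the low-rank parameterization cheap relative to a full covariance over the grid.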