🤖 AI Summary
To address weak nonlinear modeling capability and unreliable uncertainty quantification in function-to-function regression, this paper proposes a novel operator learning framework based on deep Gaussian processes (DGPs). The method embeds kernel integral transformations—approximated discretely—into the network architecture, integrating Gaussian process interpolation with nonlinear activation functions to yield flexible yet interpretable functional mappings. Furthermore, it introduces a scalable variational inference algorithm leveraging inducing points and whitening transformations. Evaluated on sparse, irregular, and noisy functional data, the framework achieves significant improvements in both predictive accuracy and uncertainty calibration. It consistently outperforms existing functional linear models and neural operator methods across diverse benchmarks, including spatiotemporal forecasting, curve-to-curve prediction, and partial differential equation (PDE)-driven tasks.
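The core building block described above—a kernel integral transform applied to a sampled function—can be sketched in a few lines. This is an illustrative quadrature approximation under assumed names (`rbf_kernel`, `integral_transform`), not the paper's implementation: on a fixed grid, \((Tf)(y) = \int k(y,x)f(x)\,dx \approx \sum_i w_i\, k(y, x_i)\, f(x_i)\).

```python
import numpy as np

# Hypothetical sketch of a discretized kernel integral transform:
# (T f)(y) = ∫ k(y, x) f(x) dx ≈ Σ_i w_i k(y, x_i) f(x_i).
# Function and parameter names are illustrative assumptions.

def rbf_kernel(ys, xs, lengthscale=0.2):
    # Squared-exponential kernel between output and input locations.
    d = ys[:, None] - xs[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def integral_transform(f_vals, xs, ys, lengthscale=0.2):
    # Trapezoid-style quadrature weights on the fixed input locations.
    w = np.gradient(xs)
    K = rbf_kernel(ys, xs, lengthscale)
    return K @ (w * f_vals)  # transformed function sampled at ys

# Example: transform a sine wave sampled on a fixed grid.
xs = np.linspace(0.0, 1.0, 50)
ys = np.linspace(0.0, 1.0, 30)
g = integral_transform(np.sin(2 * np.pi * xs), xs, ys)
print(g.shape)  # (30,)
```

Because the input locations are fixed, the quadrature weights can be precomputed once, which is the simplification the paper exploits when collapsing discrete kernel approximations into direct functional integral transforms.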
📝 Abstract
Learning mappings between functional spaces, also known as function-on-function regression, plays a crucial role in functional data analysis and has broad applications, e.g., spatiotemporal forecasting, curve prediction, and climate modeling. Existing approaches, such as functional linear models and neural operators, either fall short of capturing complex nonlinearities or lack reliable uncertainty quantification under noisy, sparse, and irregularly sampled data. To address these issues, we propose Deep Gaussian Processes for Functional Maps (DGPFM). Our method designs a sequence of GP-based linear and nonlinear transformations, leveraging integral transforms of kernels, GP interpolation, and nonlinear activations sampled from GPs. A key insight simplifies implementation: under fixed locations, discrete approximations of kernel integral transforms collapse into direct functional integral transforms, enabling flexible incorporation of various integral transform designs. To achieve scalable probabilistic inference, we use inducing points and whitening transformations to develop a variational learning algorithm. Empirical results on real-world and PDE benchmark datasets demonstrate the advantage of DGPFM in both predictive performance and uncertainty calibration.
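The whitening transformation mentioned for variational inference can be illustrated with a minimal sketch. In sparse variational GPs generally (all names below are assumptions, not the authors' code), the inducing outputs are reparameterized as \(u = Lv\) with \(K_{zz} = LL^\top\), so the prior over the whitened variable \(v\) is a standard normal, which typically improves optimization conditioning:

```python
import numpy as np

# Hedged sketch of whitening in a sparse variational GP layer.
# Predictive mean under whitening: E[f(X)] = K_xz L^{-T} m,
# where K_zz = L L^T and m is the whitened variational mean.
# All variable names here are illustrative assumptions.

def rbf(a, b, lengthscale=0.3):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
Z = np.linspace(0.0, 1.0, 10)       # inducing locations
X = np.linspace(0.0, 1.0, 40)       # prediction locations

Kzz = rbf(Z, Z) + 1e-6 * np.eye(10) # jitter for numerical stability
L = np.linalg.cholesky(Kzz)
Kxz = rbf(X, Z)

m = rng.normal(size=10)             # whitened variational mean
A = np.linalg.solve(L, Kxz.T).T     # = K_xz L^{-T}, via triangular solve
mean = A @ m                        # predictive mean at X
print(mean.shape)  # (40,)
```

Solving against the Cholesky factor rather than inverting \(K_{zz}\) directly keeps the computation stable, and the standard-normal prior on \(v\) makes the KL term in the variational objective simple to evaluate.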