Deep Gaussian Processes for Functional Maps

📅 2025-10-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address weak nonlinear modeling capability and unreliable uncertainty quantification in function-to-function regression, this paper proposes a novel operator learning framework based on deep Gaussian processes (DGPs). The method embeds kernel integral transformations, approximated discretely, into the network architecture, integrating Gaussian process interpolation with nonlinear activation functions to yield flexible yet interpretable functional mappings. Furthermore, it introduces a scalable variational inference algorithm leveraging inducing points and whitening transformations. Evaluated on sparse, irregular, and noisy functional data, the framework achieves significant improvements in both predictive accuracy and uncertainty calibration, consistently outperforming existing functional linear models and neural operator methods across diverse benchmarks, including spatiotemporal forecasting, curve-to-curve prediction, and partial differential equation (PDE)-driven tasks.
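The key simplification behind the linear layers is that, with function values observed at fixed locations, the kernel integral transform (Tf)(s) = ∫ k(s, t) f(t) dt reduces to a quadrature-weighted matrix product. Below is a minimal NumPy sketch of that idea; the RBF kernel, the gradient-based quadrature weights, and names like `kernel_integral_transform` are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def rbf_kernel(s, t, lengthscale=0.2):
    """RBF kernel matrix k(s_i, t_j); the framework admits other transform kernels."""
    d = s[:, None] - t[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def kernel_integral_transform(f_vals, t_in, t_out, lengthscale=0.2):
    """Discrete approximation of (Tf)(s) = ∫ k(s, t) f(t) dt.

    With inputs sampled at fixed locations t_in, the integral collapses
    into a quadrature-weighted matrix product: K @ (w * f).
    """
    # Quadrature weights approximating local grid spacing (handles irregular grids).
    w = np.gradient(t_in)
    K = rbf_kernel(t_out, t_in, lengthscale)
    return K @ (w * f_vals)

# Example: transform a noisy sine sampled at 50 irregular locations.
rng = np.random.default_rng(0)
t_in = np.sort(rng.uniform(0.0, 1.0, size=50))
f_vals = np.sin(2 * np.pi * t_in) + 0.05 * rng.standard_normal(50)
t_out = np.linspace(0.0, 1.0, 100)
g_vals = kernel_integral_transform(f_vals, t_in, t_out)
print(g_vals.shape)  # (100,)
```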

📝 Abstract
Learning mappings between functional spaces, also known as function-on-function regression, plays a crucial role in functional data analysis and has broad applications, e.g., spatiotemporal forecasting, curve prediction, and climate modeling. Existing approaches, such as functional linear models and neural operators, either fall short of capturing complex nonlinearities or lack reliable uncertainty quantification under noisy, sparse, and irregularly sampled data. To address these issues, we propose Deep Gaussian Processes for Functional Maps (DGPFM). Our method designs a sequence of GP-based linear and nonlinear transformations, leveraging integral transforms of kernels, GP interpolation, and nonlinear activations sampled from GPs. A key insight simplifies implementation: under fixed locations, discrete approximations of kernel integral transforms collapse into direct functional integral transforms, enabling flexible incorporation of various integral transform designs. To achieve scalable probabilistic inference, we use inducing points and whitening transformations to develop a variational learning algorithm. Empirical results on real-world and PDE benchmark datasets demonstrate the advantage of DGPFM in both predictive performance and uncertainty calibration.
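The inference recipe the abstract names, inducing points plus a whitening transformation, is the standard whitened parameterization used in sparse variational GPs: with K_zz = L_z L_z^T, the inducing values are written as u = L_z v with q(v) = N(m, S), which conditions the optimization landscape. A minimal sketch of the resulting predictive computation, assuming an RBF kernel and untrained variational parameters (all names here are illustrative, not the paper's code):

```python
import numpy as np

def rbf(a, b, lengthscale=0.5):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def whitened_svgp_predict(x, z, m, L_S, lengthscale=0.5, jitter=1e-6):
    """Predictive mean/variance of a variational GP with whitened inducing values.

    Whitening: u = L_z @ v with K_zz = L_z L_z^T and q(v) = N(m, S), S = L_S L_S^T.
    Then f(x) has mean A @ m and covariance K_xx - A A^T + A S A^T,
    where A = K_xz L_z^{-T}.
    """
    K_zz = rbf(z, z, lengthscale) + jitter * np.eye(len(z))
    L_z = np.linalg.cholesky(K_zz)
    K_xz = rbf(x, z, lengthscale)
    # A = K_xz @ inv(L_z).T, computed with a linear solve instead of an explicit inverse.
    A = np.linalg.solve(L_z, K_xz.T).T
    mean = A @ m
    K_xx_diag = np.ones(len(x))  # the RBF kernel has unit variance on the diagonal
    var = K_xx_diag - np.sum(A**2, axis=1) + np.sum((A @ L_S) ** 2, axis=1)
    return mean, var

# Example with 10 inducing points and untrained variational parameters.
z = np.linspace(0.0, 1.0, 10)
m = np.zeros(10)
L_S = 0.1 * np.eye(10)
x = np.linspace(0.0, 1.0, 200)
mean, var = whitened_svgp_predict(x, z, m, L_S)
print(mean.shape, var.shape)  # (200,) (200,)
```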
Problem

Research questions and friction points this paper is trying to address.

Learning mappings between functional spaces with nonlinear transformations
Addressing limitations in capturing complex nonlinearities and uncertainty
Providing scalable probabilistic inference for functional data analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep Gaussian processes tailored to functional mappings
GP-based linear and nonlinear transformations, combining kernel integral transforms, GP interpolation, and activations sampled from GPs (see the sketch after this list)
Variational learning with inducing points and whitening for scalability
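One concrete way to realize an "activation sampled from a GP" in closed form is a random Fourier feature expansion of an RBF-kernel GP; the sketch below uses that construction as an illustrative stand-in, not necessarily how DGPFM parameterizes its activations.

```python
import numpy as np

def sample_gp_activation(n_features=100, lengthscale=1.0, seed=0):
    """Return a function that is (approximately) a draw from a zero-mean GP
    with an RBF kernel, via random Fourier features:
        f(x) ≈ sqrt(2 / D) * sum_i theta_i * cos(w_i * x + b_i).
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_features) / lengthscale  # spectral frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)      # random phases
    theta = rng.standard_normal(n_features)            # feature weights

    def activation(x):
        phi = np.sqrt(2.0 / n_features) * np.cos(np.outer(x, w) + b)
        return phi @ theta

    return activation

# Apply the sampled function elementwise, like a learned nonlinearity.
act = sample_gp_activation()
h = act(np.linspace(-3.0, 3.0, 7))
print(h.round(2))
```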
Matthew Lowery
Kahlert School of Computing, University of Utah
Zhitong Xu
Kahlert School of Computing, University of Utah
Da Long
University of Utah
Machine Learning · Bayesian Machine Learning · AI for Science
Keyan Chen
Kahlert School of Computing, University of Utah
Daniel S. Johnson
Kahlert School of Computing, University of Utah
Yang Bai
Department of Health and Kinesiology, University of Utah
Varun Shankar
Kahlert School of Computing, University of Utah
Scientific Machine Learning · Scientific Computing
Shandian Zhe
School of Computing, University of Utah
Probabilistic Machine Learning