Tensor Gaussian Processes: Efficient Solvers for Nonlinear PDEs

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing machine learning solvers for partial differential equations (PDEs) suffer from either low efficiency, due to stochastic neural network training, or poor scalability, since Gaussian processes (GPs) incur prohibitive computational cost in high-dimensional or large-scale collocation settings. To address these limitations, we propose the Tensorized Gaussian Process Solver (TGPS), an efficient framework for nonlinear PDEs. TGPS models the high-dimensional solution function as a tensor product of univariate GP factors and achieves scalability via tensor decomposition combined with partial variable freezing. It integrates Newton linearization with alternating least squares (ALS) optimization, enabling closed-form updates that drastically reduce computational complexity. Theoretical analysis establishes convergence guarantees and rigorous error bounds, and experiments on multiple benchmark nonlinear PDEs demonstrate that TGPS consistently outperforms state-of-the-art methods in both accuracy and computational efficiency.
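To make the tensor-product ansatz concrete, below is a minimal sketch (my own illustration, not the authors' code): a rank-R, CP-style representation u(x_1, ..., x_d) ≈ Σ_r Π_k f_k^r(x_k), where each univariate factor f_k^r is expressed in a 1D kernel basis over its own collocation grid. The RBF kernel, grid sizes, and the `TensorGPSolution` class are assumptions for illustration.

```python
import numpy as np

def rbf_1d(a, b, lengthscale=0.2):
    """Squared-exponential kernel matrix between two 1D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)

class TensorGPSolution:
    """Hypothetical rank-R ansatz: u(x) ~= sum_r prod_k f_k^r(x_k)."""
    def __init__(self, grids, rank, seed=0):
        rng = np.random.default_rng(seed)
        self.grids = grids  # one 1D collocation grid per input dimension
        self.rank = rank
        # weights[k]: kernel coefficients of the R factors along dimension k
        self.weights = [1e-2 * rng.standard_normal((len(g), rank)) for g in grids]

    def factor_values(self, k, x_k):
        """Evaluate all R univariate factors along dimension k at points x_k."""
        return rbf_1d(np.atleast_1d(x_k), self.grids[k]) @ self.weights[k]

    def __call__(self, x):
        """Evaluate u at a single d-dimensional point x."""
        vals = np.ones(self.rank)
        for k, x_k in enumerate(x):
            vals *= self.factor_values(k, x_k)[0]
        return vals.sum()

# Example: a rank-3 ansatz for a 2D problem on [0, 1]^2
u = TensorGPSolution([np.linspace(0, 1, 20)] * 2, rank=3)
print(u(np.array([0.3, 0.7])))
```

Because the solution is a sum of products of univariate functions, learning reduces to fitting d collections of 1D GP factors rather than one GP over all d dimensions, which is the source of the claimed scalability.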

📝 Abstract
Machine learning solvers for partial differential equations (PDEs) have attracted growing interest. However, most existing approaches, such as neural network solvers, rely on stochastic training, which is inefficient and typically requires many training epochs. Gaussian process (GP)/kernel-based solvers, while mathematically principled, suffer from scalability issues when handling the large numbers of collocation points often needed for challenging or higher-dimensional PDEs. To overcome these limitations, we propose TGPS, a tensor-GP-based solver that models factor functions along each input dimension using one-dimensional GPs and combines them via tensor decomposition to approximate the full solution. This design reduces the task to learning a collection of one-dimensional GPs, substantially lowering computational complexity and enabling scalability to massive collocation sets. For efficient nonlinear PDE solving, we use a partial freezing strategy and Newton's method to linearize the nonlinear terms. We then develop an alternating least squares (ALS) approach that admits closed-form updates, thereby substantially improving training efficiency. We establish theoretical guarantees on the expressivity of our model, together with a convergence proof and error analysis under standard regularity assumptions. Experiments on several benchmark PDEs demonstrate that our method achieves superior accuracy and efficiency compared to existing approaches.
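As a concrete instance of the Newton linearization step, consider a cubic nonlinearity in a residual such as N(u) = Lu + u^3 - f. The sketch below is my own schematic (the cubic term and variable names are assumptions, not taken from the paper): a first-order Taylor expansion around the current iterate u_n turns the nonlinear term into an affine one, so each outer iteration only requires solving a linear problem.

```python
import numpy as np

def linearized_cubic(u_n):
    """Return (A_diag, b_shift) such that u**3 ~= A_diag * u + b_shift near u_n."""
    A_diag = 3.0 * u_n**2              # Jacobian of u**3 at the iterate u_n
    b_shift = u_n**3 - A_diag * u_n    # constant term; equals -2 * u_n**3
    return A_diag, b_shift

u_n = np.array([0.5, 1.0, 2.0])        # current iterate at collocation points
A_diag, b_shift = linearized_cubic(u_n)
u_trial = u_n + 0.1                    # a nearby candidate solution
print(A_diag * u_trial + b_shift)      # linearized prediction of u_trial**3
print(u_trial**3)                      # close for small steps away from u_n
```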
Problem

Research questions and friction points this paper is trying to address.

Developing scalable Gaussian process solvers for high-dimensional nonlinear PDEs
Overcoming computational inefficiency in stochastic training of PDE solvers
Addressing scalability issues with large numbers of collocation points in PDE solving
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tensor Gaussian Processes model factor functions using one-dimensional GPs
Partial freezing strategy and Newton's method linearize nonlinear PDE terms
Alternating least squares approach enables efficient closed-form updates (see the sketch after this list)
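The following is a minimal sketch of the ALS idea on a toy rank-R bilinear fit; it is my own schematic, not the TGPS objective (in the paper, the analogous updates act on kernel-parameterized GP factors and the Newton-linearized PDE residual). Freezing one factor matrix makes the objective quadratic in the other, so each update is a closed-form ridge solve.

```python
import numpy as np

def als_step(Y, B, lam=1e-6):
    """Closed-form update of A for min_A ||Y - A @ B.T||_F^2 + lam * ||A||_F^2."""
    G = B.T @ B + lam * np.eye(B.shape[1])   # (R, R) regularized Gram matrix
    return np.linalg.solve(G, B.T @ Y.T).T   # A = Y @ B @ inv(G)

rng = np.random.default_rng(0)
Y = rng.standard_normal((30, 25))            # toy targets on a 30 x 25 grid
A = rng.standard_normal((30, 3))             # rank-3 factors, initialized randomly
B = rng.standard_normal((25, 3))
for _ in range(50):                          # alternate the two closed-form solves
    A = als_step(Y, B)
    B = als_step(Y.T, A)
print(np.linalg.norm(Y - A @ B.T))           # residual after alternating sweeps
```

Each sweep solves a small R x R linear system per factor instead of running gradient descent, which is why the updates are cheap and deterministic.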
Qiwei Yuan
Kahlert School of Computing, University of Utah
Zhitong Xu
Kahlert School of Computing, University of Utah
Yinghao Chen
Kahlert School of Computing, University of Utah
Yiming Xu
Department of Mathematics, University of Kentucky
H. Owhadi
Computing + Mathematical Sciences (CMS) Department, California Institute of Technology
Shandian Zhe
Kahlert School of Computing, University of Utah
Probabilistic Machine Learning