🤖 AI Summary
Existing machine learning solvers for partial differential equations (PDEs) suffer from either low efficiency, due to stochastic neural network training, or poor scalability, since Gaussian processes (GPs) incur prohibitive computational cost in high-dimensional or large-scale collocation settings. To address these limitations, we propose the Tensorized Gaussian Process Solver (TGPS), an efficient framework for nonlinear PDEs. TGPS models the high-dimensional solution as a tensor product of univariate GP factors and achieves scalability via tensor decomposition combined with partial variable freezing. It integrates Newton linearization with alternating-least-squares optimization, enabling closed-form updates that drastically reduce computational complexity. Theoretical analysis establishes convergence guarantees and rigorous error bounds. Experiments on multiple benchmark nonlinear PDEs show that TGPS consistently outperforms state-of-the-art methods in both accuracy and computational efficiency.
📝 Abstract
Machine learning solvers for partial differential equations (PDEs) have attracted growing interest. However, most existing approaches, such as neural network solvers, rely on stochastic training, which is inefficient and typically requires many training epochs. Gaussian process (GP)/kernel-based solvers, while mathematically principled, suffer from scalability issues when handling the large numbers of collocation points often needed for challenging or higher-dimensional PDEs. To overcome these limitations, we propose TGPS, a tensor-GP-based solver that models factor functions along each input dimension with one-dimensional GPs and combines them via tensor decomposition to approximate the full solution. This design reduces the task to learning a collection of one-dimensional GPs, substantially lowering computational complexity and enabling scalability to massive collocation sets. For efficient nonlinear PDE solving, we use a partial freezing strategy and Newton's method to linearize the nonlinear terms. We then develop an alternating least squares (ALS) approach that admits closed-form updates, thereby substantially improving training efficiency. We establish theoretical guarantees on the expressivity of our model, together with a convergence proof and error analysis under standard regularity assumptions. Experiments on several benchmark PDEs demonstrate that our method achieves superior accuracy and efficiency compared with existing approaches.
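To make the core idea concrete, below is a minimal, hedged sketch of the separable-factor construction the abstract describes: a two-dimensional function is approximated as a product of one-dimensional factors, each expressed in a 1D RBF kernel basis (as a 1D GP posterior mean would be), and the coefficients are fitted by alternating least squares with closed-form ridge updates. All names (`rbf`, the grid sizes, the lengthscale `ell`, the regularizer `lam`) are illustrative assumptions, not the paper's actual TGPS implementation, which additionally handles PDE residuals, Newton linearization, and partial freezing.

```python
import numpy as np

def rbf(a, b, ell=0.2):
    # 1D squared-exponential (RBF) kernel matrix between point sets a and b.
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

# Illustrative rank-1 separable target u(x, y) = sin(pi x) * exp(y) on a grid.
nx, ny = 20, 20
x = np.linspace(0.0, 1.0, nx)
y = np.linspace(0.0, 1.0, ny)
U = np.sin(np.pi * x)[:, None] * np.exp(y)[None, :]

Kx, Ky = rbf(x, x), rbf(y, y)        # 1D kernel bases for each input dimension
alpha = np.ones(nx)                  # coefficients of the x-factor f = Kx @ alpha
beta = np.ones(ny)                   # coefficients of the y-factor g = Ky @ beta
lam = 1e-8                           # small ridge regularizer for stability

for _ in range(10):
    # Freeze the y-factor; closed-form ridge update for alpha minimizing
    # ||U - (Kx @ alpha) g^T||_F^2 + lam * ||alpha||^2.
    g = Ky @ beta
    alpha = np.linalg.solve((g @ g) * (Kx.T @ Kx) + lam * np.eye(nx),
                            Kx.T @ (U @ g))
    # Freeze the x-factor; symmetric closed-form update for beta.
    f = Kx @ alpha
    beta = np.linalg.solve((f @ f) * (Ky.T @ Ky) + lam * np.eye(ny),
                           Ky.T @ (U.T @ f))

approx = (Kx @ alpha)[:, None] * (Ky @ beta)[None, :]
err = np.linalg.norm(U - approx) / np.linalg.norm(U)
print(f"relative error: {err:.2e}")
```

Each ALS step solves only a small 1D linear system, which is the source of the complexity reduction the abstract claims: the per-dimension cost scales with the number of 1D basis points rather than with the full tensor-product grid.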