Toward Efficient Kernel-Based Solvers for Nonlinear PDEs

📅 2024-10-15
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Efficiently solving nonlinear partial differential equations (PDEs) at large numbers of collocation points remains challenging due to the high computational and memory costs of conventional kernel-based methods, particularly those requiring coupled solution–derivative Gram matrices. Method: We propose a kernel interpolation framework that embeds no differential operators in the kernel: the solution is approximated via standard kernel interpolation, and its derivatives are computed explicitly by differentiating the interpolant, eliminating the need for derivative-coupled Gram matrices. Crucially, differentiation is fully decoupled from the kernel function; combined with structured grids and product kernels, this induces Kronecker-product structure in the system matrices. Contribution/Results: The resulting method achieves optimal O(N) memory complexity and O(N log N) computational complexity. We establish theoretical convergence rates and validate the framework on multiple benchmark nonlinear PDEs, demonstrating high accuracy, excellent scalability, and robust performance up to million-point collocation problems.
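The Kronecker-product structure mentioned in the summary can be illustrated with a minimal NumPy sketch (not the paper's code; the Gaussian kernel, grid sizes, lengthscale `ell`, and jitter are illustrative choices). With a product kernel on a tensor grid, the full Gram matrix factors as K1 ⊗ K2, so interpolation weights can be obtained from two small 1D solves rather than one huge one:

```python
import numpy as np

def gauss_kernel(x, y, ell=0.1):
    # 1D Gaussian (RBF) kernel matrix between point sets x and y
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * ell**2))

# Tensor grid of n1 * n2 collocation points in [0, 1]^2
n1, n2 = 30, 40
x1 = np.linspace(0, 1, n1)
x2 = np.linspace(0, 1, n2)

# Product kernel => Gram matrix is K1 (x) K2; we only ever form the
# small 1D factors, never the (n1*n2) x (n1*n2) full Gram matrix.
jitter = 1e-8
K1 = gauss_kernel(x1, x1) + jitter * np.eye(n1)
K2 = gauss_kernel(x2, x2) + jitter * np.eye(n2)

# Toy target values on the grid: f(x) = sin(pi x1) cos(pi x2)
F = np.sin(np.pi * x1)[:, None] * np.cos(np.pi * x2)[None, :]

# Interpolation weights A = K1^{-1} F K2^{-1}: solving along each grid
# axis separately is equivalent to solving the full Kronecker system.
A = np.linalg.solve(K1, F)       # solve along the x1 axis
A = np.linalg.solve(K2, A.T).T   # solve along the x2 axis

# Check: the Kronecker mat-vec K1 A K2^T reproduces F without
# ever materializing K1 (x) K2.
F_rec = K1 @ A @ K2.T
print(np.allclose(F_rec, F, atol=1e-5))
```

This is the source of the claimed scalability: storage is O(n1² + n2²) instead of O((n1·n2)²), and each solve touches only a 1D factor.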

📝 Abstract
This paper introduces a novel kernel learning framework toward efficiently solving nonlinear partial differential equations (PDEs). In contrast to the state-of-the-art kernel solver that embeds differential operators within kernels, posing challenges with a large number of collocation points, our approach eliminates these operators from the kernel. We model the solution using a standard kernel interpolation form and differentiate the interpolant to compute the derivatives. Our framework obviates the need for complex Gram matrix construction between solutions and their derivatives, allowing for a straightforward implementation and scalable computation. As an instance, we allocate the collocation points on a grid and adopt a product kernel, which yields a Kronecker product structure in the interpolation. This structure enables us to avoid computing the full Gram matrix, reducing costs and scaling efficiently to a large number of collocation points. We provide a proof of the convergence and rate analysis of our method under appropriate regularity assumptions. In numerical experiments, we demonstrate the advantages of our method in solving several benchmark PDEs.
Problem

Research questions and friction points this paper is trying to address.

Efficiently solving nonlinear PDEs with kernel methods
Reducing computational cost in kernel-based PDE solvers
Scalable framework for large collocation point sets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Eliminates differential operators from the kernel
Computes derivatives by differentiating a standard kernel interpolant
Exploits Kronecker-product structure for scalable computation
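The second innovation, computing derivatives by differentiating the interpolant rather than embedding the operator in the Gram matrix, can be sketched in 1D (a hedged illustration, not the paper's implementation; the Gaussian kernel, `ell`, jitter, and the test function sin(2πx) are assumptions for demonstration):

```python
import numpy as np

ell = 0.15  # illustrative lengthscale

def k(x, y):
    # 1D Gaussian kernel matrix
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * ell**2))

def dk_dx(x, y):
    # Derivative of the kernel in its first argument. The operator is
    # applied only at evaluation time; the Gram matrix below never sees it.
    return -(x[:, None] - y[None, :]) / ell**2 * k(x, y)

# Standard kernel interpolation of u(x) = sin(2*pi*x) on 30 points
xs = np.linspace(0, 1, 30)
alpha = np.linalg.solve(k(xs, xs) + 1e-8 * np.eye(xs.size),
                        np.sin(2 * np.pi * xs))

# Derivative of the interpolant: u'(x) ~ sum_i alpha_i * d/dx k(x, x_i),
# evaluated at interior test points
xt = np.linspace(0.1, 0.9, 9)
u_prime = dk_dx(xt, xs) @ alpha

# Compare against the exact derivative 2*pi*cos(2*pi*x)
err = np.max(np.abs(u_prime - 2 * np.pi * np.cos(2 * np.pi * xt)))
print(err)
```

The Gram matrix here is the plain kernel matrix k(xs, xs); derivative information enters only through `dk_dx` at evaluation, which is what removes the coupled solution–derivative blocks from the linear system.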
Zhitong Xu
Kahlert School of Computing, The University of Utah
Da Long
University of Utah
Machine Learning · Bayesian Machine Learning · AI for Science
Yiming Xu
Department of Mathematics, University of Kentucky
Guang Yang
Kahlert School of Computing, The University of Utah
Shandian Zhe
School of Computing, University of Utah
Probabilistic Machine Learning
H. Owhadi
Department of Computing and Mathematical Sciences, California Institute of Technology