Solving Roughly Forced Nonlinear PDEs via Misspecified Kernel Methods and Neural Networks

📅 2025-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional Gaussian processes (GPs) and physics-informed neural networks (PINNs) lose accuracy and their convergence guarantees on nonlinear PDEs driven by rough source terms, such as those arising as pathwise solutions to stochastic PDEs, because both rely on the solution being highly regular. Method: The paper conditions a smooth GP, whose kernel is deliberately misspecified relative to the true solution's regularity, to satisfy the PDE in a weak sense, i.e., integrated against a finite set of test functions. This is equivalent to replacing the empirical $L^2$ loss on the PDE constraint with an empirical negative Sobolev norm; carrying the same loss over to neural networks yields a new PINN variant, NeS-PINN. Contribution/Results: The approach preserves provable convergence guarantees even under highly irregular source terms, improving numerical stability and accuracy, and equips mesh-free GP and NN solvers for roughly forced nonlinear PDEs with a rigorous theoretical foundation.

📝 Abstract
We consider the use of Gaussian Processes (GPs) or Neural Networks (NNs) to numerically approximate the solutions to nonlinear partial differential equations (PDEs) with rough forcing or source terms, which commonly arise as pathwise solutions to stochastic PDEs. Kernel methods have recently been generalized to solve nonlinear PDEs by approximating their solutions as the maximum a posteriori estimator of GPs that are conditioned to satisfy the PDE at a finite set of collocation points. The convergence and error guarantees of these methods, however, rely on the PDE being defined in a classical sense and its solution possessing sufficient regularity to belong to the associated reproducing kernel Hilbert space. We propose a generalization of these methods to handle roughly forced nonlinear PDEs while preserving convergence guarantees with an oversmoothing GP kernel that is misspecified relative to the true solution's regularity. This is achieved by conditioning a regular GP to satisfy the PDE with a modified source term in a weak sense (when integrated against a finite number of test functions). This is equivalent to replacing the empirical $L^2$-loss on the PDE constraint by an empirical negative-Sobolev norm. We further show that this loss function can be used to extend physics-informed neural networks (PINNs) to stochastic equations, thereby resulting in a new NN-based variant termed Negative Sobolev Norm-PINN (NeS-PINN).
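The key mechanism in the abstract, replacing the empirical $L^2$ loss on the PDE residual by an empirical negative-Sobolev norm obtained by integrating the residual against a finite set of smooth test functions, can be illustrated with a minimal sketch. The function name, the sine test functions, and the 1D domain $[0,1]$ are illustrative choices for exposition, not the paper's exact construction.

```python
import numpy as np

def neg_sobolev_loss(residual_vals, xs, n_test=8):
    """Sketch of a squared H^{-1}([0,1])-type loss for a PDE residual.

    Instead of the pointwise L^2 loss sum_i r(x_i)^2, the residual r is
    integrated against smooth test functions phi_j(x) = sin(j*pi*x) and
    each weak residual is weighted by 1 / ||phi_j||_{H^1}^2, which are
    known in closed form for this basis. Illustrative choices only.
    """
    dx = xs[1] - xs[0]  # uniform quadrature spacing
    loss = 0.0
    for j in range(1, n_test + 1):
        phi = np.sin(j * np.pi * xs)
        # weak residual <r, phi_j> via a simple Riemann sum
        weak = np.sum(residual_vals * phi) * dx
        # ||phi_j||_{H^1}^2 = ||phi_j||_{L^2}^2 + ||phi_j'||_{L^2}^2
        #                   = 1/2 + (j*pi)^2 / 2
        h1_norm_sq = 0.5 * (1.0 + (j * np.pi) ** 2)
        loss += weak ** 2 / h1_norm_sq
    return loss

# A rough, highly oscillatory residual is penalized far less than a
# smooth residual of comparable L^2 size -- the behavior that makes the
# loss tolerant of rough forcing:
xs = np.linspace(0.0, 1.0, 2001)
rough = np.sin(50 * np.pi * xs)   # oscillatory residual
smooth = np.sin(np.pi * xs)       # smooth residual, same L^2 magnitude
print(neg_sobolev_loss(rough, xs) < neg_sobolev_loss(smooth, xs))  # True
```

In a PINN training loop this scalar would replace the usual mean-squared PDE residual, with `residual_vals` produced by automatic differentiation of the network output.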
Problem

Research questions and friction points this paper is trying to address.

Gaussian Processes
Neural Networks
Complex Source Term Estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gaussian Process Adjustment
NeS-PINN
Complex Equation Estimation
Matthieu Darcy
Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125
Edoardo Calvello
Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125
Ricardo Baptista
University of Toronto
uncertainty quantification, inverse problems, data assimilation, computational statistics
H. Owhadi
Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125
Andrew M. Stuart
Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125
Xianjin Yang
California Institute of Technology
Partial Differential Equations, Mean Field Games, Optimization, Gaussian Processes