🤖 AI Summary
Existing neural PDE solvers lack rigorous theoretical guarantees linking residual errors to solution-space errors, making their generalization performance difficult to quantify. This work establishes a unified theoretical framework that, under the assumption that the neural approximations lie in a compact subset of the solution space, combines functional-space analysis, generalization theory, and probabilistic inequalities to derive, for the first time, explicit and certifiable deterministic and probabilistic generalization bounds relating pointwise collocation residuals, initial-condition errors, and boundary-condition errors to the overall solution error. These bounds fill a theoretical gap in physics-informed neural networks concerning solution-error control and provide rigorous convergence and reliability guarantees for residual-based training methodologies.
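To fix intuition, a deterministic bound of this kind typically has the schematic shape sketched below; the generic form, the choice of norms, and the constant are illustrative assumptions, not the paper's exact statement. Here $u_\theta$ denotes the neural approximation, $u^*$ the true solution, $R[\cdot]$ the PDE residual operator, $g$ the boundary data, and $u_0$ the initial data:

```latex
% Illustrative shape only (not the paper's theorem): a deterministic
% bound controlling solution error by residual, boundary, and initial
% errors, with a stability constant C depending on the compact subset
% and the PDE.
\| u_\theta - u^* \|_{L^2(\Omega)}
  \le C \Big( \| R[u_\theta] \|_{L^2(\Omega)}
            + \| u_\theta - g \|_{L^2(\partial\Omega)}
            + \| u_\theta(\cdot, 0) - u_0 \|_{L^2(\Omega)} \Big)
```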
📝 Abstract
Uncertainty quantification for partial differential equations is traditionally grounded in discretization theory, where solution error is controlled via mesh or grid refinement. Physics-informed neural networks fundamentally depart from this paradigm: they approximate solutions by minimizing residual losses at collocation points, introducing new sources of error arising from optimization, sampling, representation, and overfitting. As a result, controlling the generalization error in the solution space remains an open problem.
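As a concrete, hypothetical illustration of this training paradigm (not the paper's code), the sketch below minimizes a residual loss at randomly sampled collocation points for a 1D Poisson problem $u''(x) = f(x)$ with homogeneous boundary conditions; the architecture, source term, and hyperparameters are arbitrary choices:

```python
# Minimal PINN sketch (assumed setup): solve u''(x) = f(x) on [0, 1]
# with u(0) = u(1) = 0 by penalizing the PDE residual at collocation
# points. The exact solution of this manufactured problem is sin(pi x).
import torch

torch.manual_seed(0)

# Small fully connected network u_theta: R -> R.
model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def f(x):
    # Source term chosen so that u*(x) = sin(pi x).
    return -(torch.pi ** 2) * torch.sin(torch.pi * x)

def loss_fn(n_collocation=128):
    # Interior collocation points where the pointwise residual is penalized.
    x = torch.rand(n_collocation, 1, requires_grad=True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u - f(x)               # pointwise PDE residual r(x)
    # Boundary penalty enforcing u(0) = u(1) = 0.
    xb = torch.tensor([[0.0], [1.0]])
    return (residual ** 2).mean() + (model(xb) ** 2).mean()

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = loss_fn()
    loss.backward()
    opt.step()
```

Note that the trained loss only certifies the residual at the sampled points; the bounds discussed here are exactly what is needed to convert such residual control into a guarantee on the solution itself.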
Our main theoretical contribution establishes generalization bounds that connect residual control to solution-space error. We prove that when neural approximations lie in a compact subset of the solution space, vanishing residual error guarantees convergence to the true solution. We derive deterministic and probabilistic convergence results and provide certified generalization bounds that translate residual, boundary-condition, and initial-condition errors into explicit solution-error guarantees.
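A probabilistic bound of this kind plausibly takes the following high-probability form, in which the residual norm is replaced by its Monte Carlo estimate over $N$ i.i.d. collocation points $x_i$; the Hoeffding-type concentration rate shown is a standard assumption for bounded residuals, not the paper's exact theorem:

```latex
% Illustrative high-probability analogue: with probability at least
% 1 - \delta over the sampled collocation points,
\| u_\theta - u^* \|_{L^2(\Omega)}
  \le C \Bigg( \frac{1}{N} \sum_{i=1}^{N} \big| R[u_\theta](x_i) \big|^2
             + \mathcal{O}\!\Big( \sqrt{\tfrac{\log(1/\delta)}{N}} \Big)
      \Bigg)^{1/2}
```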