🤖 AI Summary
This work addresses the lack of theoretical guarantees on generalization error for physics-informed neural networks (PINNs) solving the incompressible Navier–Stokes equations. Focusing on unsupervised PINNs of depth two, the study establishes, for the first time, a rigorous upper bound on the generalization error via Rademacher complexity analysis; for appropriately weight-bounded network classes, the bound has no explicit dependence on network width. The bound explicitly reveals the dependence of the generalization gap on fluid viscosity and the loss regularization parameter, motivating the design of a novel activation function tailored to fluid dynamics. Furthermore, the theoretical analysis yields a dimension-independent sample complexity bound. Numerical experiments on the Taylor–Green vortex benchmark demonstrate both the effectiveness of the proposed method and the practical relevance of the derived theoretical bounds.
📝 Abstract
This work establishes first-of-its-kind rigorous upper bounds on the generalization error incurred when approximating solutions to the (d+1)-dimensional incompressible Navier–Stokes equations by depth-2 neural networks trained via the unsupervised Physics-Informed Neural Network (PINN) framework. This is achieved by bounding the Rademacher complexity of the PINN risk. For appropriately weight-bounded network classes, the derived generalization bounds do not explicitly depend on the network width, and our framework characterizes the generalization gap in terms of the fluid's kinematic viscosity and the loss regularization parameters. In particular, the resulting sample complexity bounds are dimension-independent. Our generalization bounds suggest novel activation functions for solving fluid dynamics problems. We provide empirical validation of the suggested activation functions and the corresponding bounds in a PINN setup solving the Taylor–Green vortex benchmark.
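To make the setting concrete, the following is a minimal sketch of the ingredients mentioned in the abstract: the regularized, unsupervised PINN risk (a Monte Carlo average of squared Navier–Stokes residuals over sampled collocation points, weighted by a regularization parameter `lam`) evaluated on the 2D Taylor–Green vortex benchmark. For illustration the trained depth-2 network is replaced by the known closed-form Taylor–Green solution, so the empirical risk should be near zero; the field names, the value of the viscosity `nu`, and the weight `lam` are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

nu = 0.01  # kinematic viscosity (illustrative value)

def fields(t, x, y):
    """Closed-form 2D Taylor-Green vortex (stands in for a trained PINN)."""
    F = np.exp(-2.0 * nu * t)
    u = -np.cos(x) * np.sin(y) * F
    v = np.sin(x) * np.cos(y) * F
    p = -0.25 * (np.cos(2 * x) + np.cos(2 * y)) * F**2
    return u, v, p

def residuals(t, x, y, h=1e-3):
    """x-momentum and divergence residuals via central finite differences
    (a real PINN would use automatic differentiation instead)."""
    u, v, p = fields(t, x, y)
    u_t = (fields(t + h, x, y)[0] - fields(t - h, x, y)[0]) / (2 * h)
    u_x = (fields(t, x + h, y)[0] - fields(t, x - h, y)[0]) / (2 * h)
    u_y = (fields(t, x, y + h)[0] - fields(t, x, y - h)[0]) / (2 * h)
    u_xx = (fields(t, x + h, y)[0] - 2 * u + fields(t, x - h, y)[0]) / h**2
    u_yy = (fields(t, x, y + h)[0] - 2 * u + fields(t, x, y - h)[0]) / h**2
    v_y = (fields(t, x, y + h)[1] - fields(t, x, y - h)[1]) / (2 * h)
    p_x = (fields(t, x + h, y)[2] - fields(t, x - h, y)[2]) / (2 * h)
    mom_u = u_t + u * u_x + v * u_y + p_x - nu * (u_xx + u_yy)
    div = u_x + v_y
    return mom_u, div

# Monte Carlo estimate of the regularized PINN risk on random collocation points.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, 1000)
x = rng.uniform(0.0, 2 * np.pi, 1000)
y = rng.uniform(0.0, 2 * np.pi, 1000)
mom_u, div = residuals(t, x, y)
lam = 1.0  # loss regularization parameter (illustrative)
risk = np.mean(mom_u**2) + lam * np.mean(div**2)
print(risk)  # near zero: the exact solution satisfies the PDE
```

The generalization gap studied in the paper is the difference between this sampled empirical risk and its population counterpart; the Rademacher-complexity argument bounds that gap in terms of `nu`, `lam`, and the network's weight norms.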