🤖 AI Summary
This work addresses the challenges of unstable convergence and limited generalization in unsupervised neural operators for solving partial differential equations (PDEs). To this end, the authors propose PhIS-FNO, a method built upon the Fourier Neural Operator (FNO) framework that incorporates a Hermite spline kernel and a multi-stage curriculum-based physics-informed constraint mechanism. This mechanism progressively enforces boundary conditions and interior residual losses, complemented by an optimizer reinitialization strategy. Remarkably, PhIS-FNO achieves accuracy comparable to fully supervised learning on multiple standard PDE benchmarks while requiring only narrow-band boundary labels, thereby significantly enhancing both the stability and generalization capability of unsupervised training.
📝 Abstract
Solving partial differential equations remains a central challenge in scientific machine learning. Neural operators offer a promising route by learning mappings between function spaces and enabling resolution-independent inference, yet they typically require supervised data. Physics-informed neural networks address this limitation through unsupervised training with physical constraints, but often suffer from unstable convergence and limited generalization. To overcome these issues, we introduce a multi-stage physics-informed training strategy that stabilizes convergence by first enforcing boundary conditions in the loss and subsequently incorporating interior residuals. At each stage the optimizer is re-initialized, acting as a continuation mechanism that restores stability and prevents gradient stagnation. We further propose the Physics-Informed Spline Fourier Neural Operator (PhIS-FNO), which combines Fourier layers with Hermite spline kernels for smooth residual evaluation. Across canonical benchmarks, PhIS-FNO attains accuracy comparable to supervised learning while using labeled information only along a narrow boundary region, establishing staged, spline-based optimization as a robust paradigm for physics-informed operator learning.
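The staged curriculum described above, boundary loss first, interior residual added later, with the optimizer state reset at each stage boundary, can be illustrated on a toy problem. The sketch below is not the authors' implementation: it uses a cubic polynomial ansatz in place of a neural operator, the 1D Poisson problem u'' = 2 with u(0) = 0, u(1) = 1 (exact solution u = x²), and plain gradient descent with momentum, where resetting the momentum buffer between stages plays the role of optimizer re-initialization.

```python
import numpy as np

def u(c, x):
    # cubic ansatz: u(x) = c0 + c1*x + c2*x^2 + c3*x^3
    return c[0] + c[1]*x + c[2]*x**2 + c[3]*x**3

def u_xx(c, x):
    # second derivative of the ansatz
    return 2*c[2] + 6*c[3]*x

xs = np.linspace(0.0, 1.0, 21)  # interior collocation points

def loss(c, w_pde):
    # boundary loss: u(0) = 0, u(1) = 1
    bc = (u(c, 0.0) - 0.0)**2 + (u(c, 1.0) - 1.0)**2
    # interior residual of the PDE u'' = 2
    pde = np.mean((u_xx(c, xs) - 2.0)**2)
    return bc + w_pde * pde

def grad(c, w_pde, eps=1e-6):
    # central finite-difference gradient (autodiff stand-in)
    g = np.zeros_like(c)
    for i in range(len(c)):
        d = np.zeros_like(c)
        d[i] = eps
        g[i] = (loss(c + d, w_pde) - loss(c - d, w_pde)) / (2*eps)
    return g

c = np.zeros(4)
# stage 1: boundary conditions only; stage 2: add the interior residual
for w_pde in [0.0, 1.0]:
    m = np.zeros_like(c)  # optimizer state (momentum) reset at each stage
    for _ in range(5000):
        m = 0.9*m + grad(c, w_pde)
        c -= 0.01*m

print(c)  # should approach [0, 0, 1, 0], i.e. u(x) = x^2
```

Stage 1 alone reaches a boundary-consistent but non-unique fit; stage 2, started with fresh optimizer state, drives the coefficients to the exact solution. In the paper's setting the ansatz is the PhIS-FNO and the residual is evaluated through its Hermite spline kernels, but the curriculum structure is the same.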