AI Summary
Physics-informed neural operators lack variational consistency in PDE solving: a small residual does not guarantee a small solution error.
Method: We propose a variationally rigorous framework based on the first-order system least-squares (FOSLS) formulation, whose loss function is provably equivalent to the solution error measured in the PDE-induced norm. Specifically: (1) we design the first variationally correct training objective for neural operators; (2) we introduce the provably convergent Reduced Basis Neural Operator (RBNO) architecture; and (3) we develop a variational lifting technique that handles mixed Dirichlet-Neumann boundary conditions uniformly and lets the residual loss be interpreted rigorously as a reliable, computable a posteriori error estimator.
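To make the objective concrete, the following is a standard FOSLS construction for the stationary diffusion case, written in our own notation as an illustration (the paper's exact formulation may differ). The second-order problem is recast as a first-order system in a flux variable, and the loss is the sum of squared residuals of that system:

```latex
% Illustrative FOSLS objective for stationary diffusion (notation is ours,
% not necessarily the paper's). Recast  -div(a grad u) = f  as the
% first-order system
%   sigma - a grad u = 0,   -div sigma = f   in Omega,
% and minimize the least-squares functional
\[
  \mathcal{L}(u,\sigma)
    = \lVert \sigma - a\nabla u \rVert_{L^2(\Omega)}^2
    + \lVert \nabla\!\cdot\sigma + f \rVert_{L^2(\Omega)}^2 .
\]
% "Variationally correct" means two-sided equivalence to the error in the
% PDE-induced norm on H^1(Omega) x H(div, Omega):
\[
  c\,\lVert (u-u^\star,\ \sigma-\sigma^\star) \rVert_{H^1\times H(\mathrm{div})}^2
  \;\le\; \mathcal{L}(u,\sigma) \;\le\;
  C\,\lVert (u-u^\star,\ \sigma-\sigma^\star) \rVert_{H^1\times H(\mathrm{div})}^2 ,
\]
% so a small loss certifies a small solution error, and conversely.
```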
Results: On stationary diffusion and linear elasticity benchmarks, the method significantly outperforms standard baselines in PDE-compliant norms. Numerical experiments confirm the theoretical error bounds, and the residual loss provides an efficient, computable guide for adaptive refinement.
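Because the loss-error equivalence makes the residual a trustworthy error indicator, one natural use is sample-wise adaptive refinement. The sketch below illustrates what this could look like; the names (`model`, `residual_loss`, `budget`) are hypothetical and are not the paper's API:

```python
# Hypothetical sketch: using the per-sample FOSLS residual as an
# a posteriori error indicator to drive adaptive refinement.
import numpy as np

def adaptive_refinement_step(model, residual_loss, samples, budget=0.1):
    """Flag the fraction `budget` of samples with the largest residual.

    Since the FOSLS loss is equivalent to the solution error in the
    PDE-induced norm, large residuals reliably localize large errors.
    """
    indicators = np.array([residual_loss(model, s) for s in samples])
    n_flag = max(1, int(budget * len(samples)))
    flagged = np.argsort(indicators)[-n_flag:]  # worst offenders first
    return flagged, indicators
```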
Abstract
Minimizing PDE-residual losses is a common strategy to promote physical consistency in neural operators. However, standard formulations often lack variational correctness, meaning that small residuals do not guarantee small solution errors due to the use of non-compliant norms or ad hoc penalty terms for boundary conditions. This work develops a variationally correct operator learning framework by constructing first-order system least-squares (FOSLS) objectives whose values are provably equivalent to the solution error in PDE-induced norms. We demonstrate this framework on stationary diffusion and linear elasticity, incorporating mixed Dirichlet-Neumann boundary conditions via variational lifts to preserve norm equivalence without inconsistent penalties. To ensure the function space conformity required by the FOSLS loss, we propose a Reduced Basis Neural Operator (RBNO). The RBNO predicts coefficients for a pre-computed, conforming reduced basis, thereby ensuring variational stability by design while enabling efficient training. We provide a rigorous convergence analysis that bounds the total error by the sum of finite element discretization bias, reduced basis truncation error, neural network approximation error, and statistical estimation errors arising from finite sampling and optimization. Numerical benchmarks validate these theoretical bounds and demonstrate that the proposed approach achieves superior accuracy in PDE-compliant norms compared to standard baselines, while the residual loss serves as a reliable, computable a posteriori error estimator.
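For intuition, here is a minimal sketch of the RBNO idea as the abstract describes it: a small network maps PDE parameters to coefficients of a precomputed, conforming reduced basis, so every prediction lies in the conforming finite element space by construction. This is our own illustrative PyTorch implementation under assumed shapes, not the authors' code:

```python
# Minimal RBNO sketch (illustrative, not the authors' implementation).
import torch
import torch.nn as nn

class ReducedBasisNeuralOperator(nn.Module):
    def __init__(self, param_dim: int, basis: torch.Tensor, width: int = 128):
        """basis: (n_dofs, n_modes) matrix of precomputed reduced-basis
        vectors (e.g., POD modes of conforming FE snapshots)."""
        super().__init__()
        self.register_buffer("basis", basis)  # fixed basis, not trained
        n_modes = basis.shape[1]
        self.coeff_net = nn.Sequential(
            nn.Linear(param_dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, n_modes),
        )

    def forward(self, mu: torch.Tensor) -> torch.Tensor:
        """mu: (batch, param_dim) -> FE coefficient vectors (batch, n_dofs)."""
        coeffs = self.coeff_net(mu)   # (batch, n_modes)
        return coeffs @ self.basis.T  # conforming by construction
```

Because the output is a linear combination of conforming basis vectors, the discrete FOSLS loss can be assembled exactly in the finite element space, and the loss-error equivalence carries over to the learned operator.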