🤖 AI Summary
This work addresses the challenge of efficiently and accurately computing derivatives—required for physics-informed loss construction—on irregular geometries using Physics-Informed Neural Operators (PINOs). To this end, we propose mGNO, the first graph neural operator supporting native automatic differentiation. Its core innovation lies in the seamless integration of automatic differentiation into the graph neural operator framework, augmented by mollification-based smoothing and geometry-adaptive graph construction, enabling exact gradient computation on arbitrary domains—including irregular grids and unstructured point clouds. mGNO enables mesh-free, purely physics-driven PDE solving and inverse design without labeled supervision. Experiments demonstrate that mGNO achieves a 20× reduction in L² error on regular grids compared to baseline PINOs; on point clouds, it achieves errors two orders of magnitude lower than Meta-PDE at comparable runtimes, while reducing solution time by 1–3 orders of magnitude relative to traditional numerical solvers at similar accuracy.
📝 Abstract
Physics-informed neural operators offer a powerful framework for learning solution operators of partial differential equations (PDEs) by combining data and physics losses. However, physics losses rely on derivatives, and computing these derivatives remains challenging: spectral and finite-difference methods introduce approximation errors due to finite resolution. Here, we propose the mollified graph neural operator (mGNO), the first method to leverage automatic differentiation and compute *exact* gradients on arbitrary geometries. This enhancement enables efficient training on irregular grids and varying geometries while allowing seamless evaluation of physics losses at randomly sampled points for improved generalization. For a PDE example on regular grids, mGNO paired with autograd reduced the L2 relative data error by 20x compared to finite differences, although training was slower. It can also solve PDEs on unstructured point clouds seamlessly, using physics losses only, at resolutions vastly lower than those needed for finite differences to be accurate enough. On these unstructured point clouds, mGNO yields errors that are consistently two orders of magnitude lower than machine learning baselines (Meta-PDE) at comparable runtimes, and delivers speedups of 1 to 3 orders of magnitude over the numerical solver at similar accuracy. mGNOs can also be used to solve inverse design and shape optimization problems on complex geometries.
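To make the autograd-vs-finite-differences distinction concrete, here is a minimal, self-contained sketch (not the paper's mGNO implementation): a toy forward-mode automatic differentiation via dual numbers, with `u(x) = x·sin(x)` standing in for a smooth, mollified model output. Autodiff recovers the derivative exactly (up to machine precision) at an arbitrary point, while a central finite difference carries a resolution-dependent truncation error.

```python
import math

class Dual:
    """Minimal forward-mode autodiff: carries a value and its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def sin(x):
    # chain rule: d/dx sin(u) = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def model(x):
    # stand-in for a smooth (mollified) operator output u(x) = x * sin(x)
    return sin(x) * x

x0 = 0.7
# exact derivative at an arbitrary point: seed dx/dx = 1 and propagate
exact = model(Dual(x0, 1.0)).dot
analytic = math.sin(x0) + x0 * math.cos(x0)

# central finite difference: accuracy limited by step size h
h = 1e-3
fd = (model(Dual(x0 + h)).val - model(Dual(x0 - h)).val) / (2 * h)

print(abs(exact - analytic))  # autodiff matches the analytic derivative
print(abs(fd - analytic))     # finite differences leave a truncation error
```

In the paper's setting the same principle applies with reverse-mode autograd through the neural operator, so physics residuals can be evaluated at randomly sampled points of an unstructured domain without any stencil or grid.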