Enabling Automatic Differentiation with Mollified Graph Neural Operators

📅 2025-04-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of efficiently and accurately computing the derivatives required to construct physics-informed losses on irregular geometries with Physics-Informed Neural Operators (PINOs). To this end, the authors propose mGNO, the first graph neural operator to support native automatic differentiation. Its core innovation is the integration of PyTorch's autograd into the graph neural operator framework, combined with mollification-based smoothing and geometry-adaptive graph construction, enabling exact gradient computation on arbitrary domains, including irregular grids and unstructured point clouds. mGNO enables mesh-free, purely physics-driven PDE solving and inverse design without labeled supervision. Experiments show a 20x reduction in L2 relative error on regular grids compared to finite-difference-based PINO baselines; on point clouds, mGNO's errors are two orders of magnitude lower than Meta-PDE's at comparable runtimes, and it is 1 to 3 orders of magnitude faster than traditional numerical solvers at similar accuracy.

📝 Abstract
Physics-informed neural operators offer a powerful framework for learning solution operators of partial differential equations (PDEs) by combining data and physics losses. However, these physics losses rely on derivatives. Computing these derivatives remains challenging, with spectral and finite difference methods introducing approximation errors due to finite resolution. Here, we propose the mollified graph neural operator (mGNO), the first method to leverage automatic differentiation and compute exact gradients on arbitrary geometries. This enhancement enables efficient training on irregular grids and varying geometries while allowing seamless evaluation of physics losses at randomly sampled points for improved generalization. For a PDE example on regular grids, mGNO paired with autograd reduced the L2 relative data error by 20x compared to finite differences, although training was slower. It can also solve PDEs on unstructured point clouds seamlessly, using physics losses only, at resolutions vastly lower than those needed for finite differences to be accurate enough. On these unstructured point clouds, mGNO leads to errors that are consistently 2 orders of magnitude lower than machine learning baselines (Meta-PDE) for comparable runtimes, and also delivers speedups from 1 to 3 orders of magnitude compared to the numerical solver for similar accuracy. mGNOs can also be used to solve inverse design and shape optimization problems on complex geometries.
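The abstract's point that finite-difference derivatives carry resolution-dependent approximation error can be seen in a tiny pure-Python sketch (illustrative only, not the paper's code): the central-difference estimate of d/dx sin(x) has O(h^2) truncation error, so it only approaches the exact value as the step h shrinks, whereas autograd-style differentiation would be exact to machine precision at any resolution.

```python
import math

def central_diff(f, x, h):
    # second-order central finite difference; truncation error is O(h^2)
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 0.7
exact = math.cos(x)  # analytic d/dx sin(x), the exact reference value
errs = []
for h in (1e-1, 1e-2, 1e-3):
    err = abs(central_diff(math.sin, x, h) - exact)
    errs.append(err)
    print(f"h={h:g}  FD error={err:.3e}")  # error shrinks roughly 100x per 10x finer h
```

On an unstructured point cloud there is no uniform small h to begin with, which is the regime where the paper argues autograd-based physics losses pay off.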
Problem

Research questions and friction points this paper is trying to address.

Computing exact gradients for PDEs on arbitrary geometries
Improving training efficiency on irregular grids and varying geometries
Enhancing accuracy and speed in solving PDEs compared to baselines
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages automatic differentiation for exact gradients
Enables efficient training on irregular grids
Solves PDEs on unstructured point clouds seamlessly
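To make the "exact gradients via automatic differentiation" idea concrete, here is a minimal forward-mode AD sketch using dual numbers, a stand-in for the PyTorch autograd that mGNO actually uses; the field u and all names below are illustrative. It differentiates a toy function exactly at arbitrary (randomly sampled) points, with no step size h involved.

```python
import random

class Dual:
    """Minimal forward-mode AD value: val + dot * eps, with eps^2 = 0."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        # product rule: (a + a'eps)(b + b'eps) = ab + (ab' + a'b)eps
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def u(x):
    # toy "solution field": u(x) = x^3 + 2x (stand-in for a network output)
    return x * x * x + 2 * x

def grad(f, x0):
    # exact derivative at x0: seed the dual part with 1 and read it back out
    return f(Dual(x0, 1.0)).dot

random.seed(0)
pts = [random.uniform(-1, 1) for _ in range(3)]
for p in pts:
    # AD result agrees with the analytic derivative 3x^2 + 2 at every point
    print(p, grad(u, p), 3 * p * p + 2)
```

Because the derivative is exact at any query location, physics losses can be evaluated at freshly sampled points each step, the property the bullets above highlight for irregular grids and point clouds.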
Authors
Ryan Y. Lin
Department of Computing + Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
Julius Berner
NVIDIA
Valentin Duruisseaux
Postdoctoral Researcher, Caltech
David Pitt
Department of Computing + Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA
Daniel Leibovici
NVIDIA, Santa Clara, CA, USA
Jean Kossaifi
Senior Research Scientist, NVIDIA
K. Azizzadenesheli
NVIDIA, Santa Clara, CA, USA
A. Anandkumar
Department of Computing + Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA