Data-Efficient Time-Dependent PDE Surrogates: Graph Neural Simulators vs Neural Operators

📅 2025-09-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural operators generalize poorly in low-data regimes and rarely encode the causal, local-in-time structure of physical evolution; purely autoregressive modeling further exacerbates error accumulation. Method: We propose a forward surrogate that combines a Graph Neural Simulator (GNS) with explicit time-stepping schemes (e.g., Euler integration), directly learning instantaneous time derivatives to solve time-dependent PDEs and thereby embedding causality and local dynamics in the model. We also introduce a PCA- and K-means-based trajectory selection strategy to improve representation efficiency in data-scarce settings. Contribution/Results: On three canonical PDEs, the method achieves <1% relative L² error using only 3% of the training data. In long-horizon forecasting, it reduces error by 82.48%–99.86% on average over baselines including DeepONet and FNO, significantly advancing low-data PDE surrogate modeling.
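The core loop the summary describes (learn the instantaneous derivative, then advance with an explicit Euler step) can be sketched as below. `learned_derivative` is a hypothetical stand-in for the trained GNS message-passing network; here it is replaced by a toy diffusion stencil so the rollout is runnable, not the paper's actual model:

```python
import numpy as np

def learned_derivative(u):
    # Hypothetical stand-in for the GNS network f_theta, which predicts
    # the instantaneous time derivative du/dt from the current state.
    # Here: a toy periodic diffusion stencil on a 1D grid.
    return 0.1 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

def rollout(u0, dt, n_steps):
    """Autoregressive forward-Euler rollout: u_{k+1} = u_k + dt * f(u_k)."""
    u = u0.copy()
    traj = [u.copy()]
    for _ in range(n_steps):
        u = u + dt * learned_derivative(u)
        traj.append(u.copy())
    return np.stack(traj)

u0 = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
traj = rollout(u0, dt=0.1, n_steps=50)
print(traj.shape)  # (51, 64)
```

Because the network predicts a derivative rather than the next state directly, the time integrator (Euler here, but any explicit scheme) carries the causal, local-in-time structure.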

📝 Abstract
Neural operators (NOs) approximate mappings between infinite-dimensional function spaces but require large datasets and struggle when training data are scarce. Many NO formulations do not explicitly encode the causal, local-in-time structure of physical evolution. While autoregressive models preserve causality by predicting the next time step, they suffer from rapid error accumulation. We employ Graph Neural Simulators (GNS), a message-passing graph neural network framework, with explicit numerical time-stepping schemes to construct accurate forward models that learn PDE solutions by modeling instantaneous time derivatives. We evaluate the framework on three canonical PDE systems: (1) the 2D scalar Burgers' equation, (2) the 2D coupled (vector) Burgers' equation, and (3) the 2D Allen-Cahn equation. Rigorous evaluations demonstrate that GNS significantly improves data efficiency, achieving higher generalization accuracy with substantially fewer training trajectories than neural operator baselines such as DeepONet and FNO. GNS consistently achieves under 1% relative L² error with only 30 of 1000 training samples (3% of the available data) across all three PDE systems. It also substantially reduces error accumulation over extended temporal horizons: averaged across all cases, GNS reduces autoregressive error by 82.48% relative to FNO AR and 99.86% relative to DON AR. We further introduce a PCA+KMeans trajectory selection strategy that enhances low-data performance. The results indicate that combining graph-based local inductive biases with conventional time integrators yields accurate, physically consistent, and scalable surrogate models for time-dependent PDEs.
Problem

Research questions and friction points this paper is trying to address.

Improving data efficiency for neural PDE surrogates with limited training data
Reducing error accumulation in autoregressive time-dependent PDE modeling
Encoding causal local-in-time structure in neural operator frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Neural Simulators with explicit time-stepping schemes
Message-passing GNN framework modeling instantaneous derivatives
PCA+KMeans trajectory selection for enhanced data efficiency
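The PCA+KMeans selection idea can be sketched as follows. This is a minimal numpy-only reconstruction of the general technique (flatten trajectories, reduce with PCA, cluster, keep one representative per cluster), not the paper's exact implementation; `select_trajectories` and its parameters are illustrative names:

```python
import numpy as np

def select_trajectories(trajectories, n_select, n_components=8, seed=0):
    """PCA + K-means trajectory selection (sketch): pick a diverse,
    representative subset of training trajectories.
    `trajectories` has shape (N, T, D): N trajectories, T steps, D dofs."""
    rng = np.random.default_rng(seed)
    X = trajectories.reshape(len(trajectories), -1)
    X = X - X.mean(axis=0)
    # PCA via SVD: project each trajectory onto the leading components.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt[:n_components].T
    # Plain Lloyd's k-means in the reduced space.
    centers = Z[rng.choice(len(Z), n_select, replace=False)]
    for _ in range(50):
        labels = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_select):
            if np.any(labels == k):
                centers[k] = Z[labels == k].mean(axis=0)
    # Keep the trajectory closest to each cluster centre.
    return np.unique([np.argmin(((Z - c) ** 2).sum(-1)) for c in centers])

demo = np.random.default_rng(1).normal(size=(100, 20, 16))
idx = select_trajectories(demo, n_select=5)
```

Selecting one trajectory per cluster spreads the small training budget across distinct dynamical behaviors rather than redundant ones, which is the stated motivation for the strategy.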
Dibyajyoti Nayak
Department of Civil and Systems Engineering, Johns Hopkins University, Baltimore, MD, 21218
Somdatta Goswami
Assistant Professor, Civil and Systems Engineering, Johns Hopkins University
Deep Learning · Physics-informed ML · Computational Mechanics · Fracture Mechanics