🤖 AI Summary
Existing PDE surrogate models often neglect physical priors, resulting in weak temporal modeling and poor extrapolation performance. To address this, we propose NODE-ONet, a physics-informed deep neural ODE operator network that integrates physical knowledge via an encoder-decoder architecture: spatially discretized representations are encoded into a latent space, where the time evolution is governed by a neural ODE explicitly constrained by the PDE dynamics through a physics-informed embedding. This design significantly enhances long-term temporal extrapolation and cross-equation-family transferability, overcoming key bottlenecks of conventional operator networks in dynamic modeling and computational complexity. Experiments on nonlinear diffusion-reaction equations and the Navier-Stokes equations demonstrate that NODE-ONet achieves high accuracy, low inference cost, and strong generalization, establishing a new paradigm for physics-driven deep operator learning.
📝 Abstract
Operator learning has emerged as a promising paradigm for developing efficient surrogate models to solve partial differential equations (PDEs). However, existing approaches often overlook the domain knowledge inherent in the underlying PDEs and hence struggle to capture temporal dynamics and to generalize beyond the training time frame. This paper introduces a deep neural ordinary differential equation (ODE) operator network framework, termed NODE-ONet, to alleviate these limitations. The framework adopts an encoder-decoder architecture comprising three core components: an encoder that spatially discretizes input functions, a neural ODE capturing latent temporal dynamics, and a decoder reconstructing solutions in the physical space. Theoretically, we establish an error analysis for the encoder-decoder architecture. Computationally, we propose novel physics-encoded neural ODEs to incorporate PDE-specific physical properties. Such well-designed neural ODEs significantly reduce the framework's complexity while enhancing numerical efficiency, robustness, applicability, and generalization capacity. Numerical experiments on nonlinear diffusion-reaction and Navier-Stokes equations demonstrate high accuracy, computational efficiency, and prediction capabilities beyond training time frames. Additionally, the framework's flexibility to accommodate diverse encoders/decoders and its ability to generalize across related PDE families further underscore its potential as a scalable, physics-encoded tool for scientific machine learning.
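The encode-evolve-decode pipeline described in the abstract can be illustrated with a minimal sketch. Everything here is an illustrative assumption rather than the paper's actual architecture: the encoder and decoder are random linear maps, the latent dimension and grid size are arbitrary, and a stable linear system with forward-Euler integration stands in for the physics-encoded neural ODE.

```python
import numpy as np

rng = np.random.default_rng(0)

n_grid = 64    # spatial discretization points seen by the encoder (assumed)
n_latent = 8   # latent dimension (assumed)

# Encoder: project the spatially discretized input function to a latent vector.
W_enc = rng.normal(scale=1.0 / np.sqrt(n_grid), size=(n_latent, n_grid))
# Decoder: map the latent state back onto the physical grid.
W_dec = rng.normal(scale=1.0 / np.sqrt(n_latent), size=(n_grid, n_latent))
# Latent dynamics dz/dt = A z: a stable linear ODE used here as a stand-in
# for the physics-encoded neural ODE.
A = -np.eye(n_latent) + 0.1 * rng.normal(size=(n_latent, n_latent))

def encode(u):
    return W_enc @ u

def latent_ode(z, t_final, dt=1e-2):
    # Forward-Euler integration of the latent ODE up to t_final.
    t = 0.0
    while t < t_final:
        z = z + dt * (A @ z)
        t += dt
    return z

def decode(z):
    return W_dec @ z

# Example: evolve a sampled initial function to time T = 1 entirely in
# latent space, then reconstruct the solution on the physical grid.
u0 = np.sin(np.linspace(0.0, np.pi, n_grid))
z0 = encode(u0)
zT = latent_ode(z0, t_final=1.0)
uT = decode(zT)
```

Because the time stepping happens in the low-dimensional latent space, extrapolating beyond the training time frame only requires integrating the latent ODE further, which is the property the abstract attributes to the framework.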