🤖 AI Summary
This work addresses the poor long-term stability and generalization of neural PDE surrogate models that predict system states directly. We propose a new paradigm: predict time derivatives, then recover states by ODE integration. Methodologically, we build on the neural ODE framework and jointly optimize a physics-informed loss with a self-supervised temporal-differencing regularizer, which enables implicit differentiation through the solver and adaptive-step integration. Our key contribution is decoupling physical constraints from data fitting, shifting the learning objective from "predicting states" to "predicting dynamics" and thereby substantially mitigating error accumulation over long rollouts. Evaluated on canonical PDE benchmarks, including the Burgers' and Navier–Stokes equations, our approach achieves a 38% reduction in average relative error, improves extrapolation capability by 2.1×, and accelerates training convergence by 1.6×, while maintaining high accuracy, strong numerical stability, and flexible inference.
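
To make the paradigm concrete, here is a minimal sketch of "predict the derivative, integrate for the state" with the combined loss described above. The library choice (`torchdiffeq` for adaptive-step integration), the convolutional derivative network, the 1D Burgers' residual, and the loss weights are all illustrative assumptions, not details taken from the paper.

```python
# Sketch: learn du/dt, recover trajectories by ODE integration.
# Assumptions (not from the paper): torchdiffeq as the adaptive-step solver,
# a small periodic conv net for the derivative field, a 1D Burgers' residual
# as the physics-informed term, and arbitrary loss weights.
import torch
import torch.nn as nn
from torchdiffeq import odeint  # adaptive Dormand-Prince (dopri5) solver

class DerivativeNet(nn.Module):
    """Predicts du/dt from the current state u on a 1D periodic grid."""
    def __init__(self, channels=1, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, width, 5, padding=2, padding_mode='circular'),
            nn.GELU(),
            nn.Conv1d(width, width, 5, padding=2, padding_mode='circular'),
            nn.GELU(),
            nn.Conv1d(width, channels, 5, padding=2, padding_mode='circular'),
        )

    def forward(self, t, u):  # torchdiffeq expects the signature f(t, u)
        return self.net(u)

def burgers_residual(f, u, dx, nu=0.01):
    """Physics term: predicted du/dt should match -u*u_x + nu*u_xx."""
    u_x  = (torch.roll(u, -1, -1) - torch.roll(u, 1, -1)) / (2 * dx)
    u_xx = (torch.roll(u, -1, -1) - 2 * u + torch.roll(u, 1, -1)) / dx**2
    return ((f - (-u * u_x + nu * u_xx)) ** 2).mean()

def loss_fn(model, u_traj, t, dx, lam_phys=1.0, lam_diff=0.1):
    """u_traj: observed states, shape (T, batch, channels, nx); t: (T,)."""
    # Roll out by integrating the learned derivative field (adaptive steps),
    # so training fits dynamics rather than one-step state predictions.
    u_pred = odeint(model, u_traj[0], t, method='dopri5')
    data_loss = ((u_pred - u_traj) ** 2).mean()

    # Physics-informed residual, evaluated at the observed states.
    f = model(t[0], u_traj.flatten(0, 1)).view_as(u_traj)
    phys_loss = burgers_residual(f, u_traj, dx)

    # Self-supervised temporal-differencing regularizer: the predicted
    # derivative should agree with finite differences of the observations.
    dt = (t[1:] - t[:-1]).view(-1, 1, 1, 1)
    diff_loss = (((u_traj[1:] - u_traj[:-1]) / dt - f[:-1]) ** 2).mean()

    return data_loss + lam_phys * phys_loss + lam_diff * diff_loss
```

Because the trained model outputs dynamics rather than next states, inference can reuse any ODE solver, step size, or tolerance, which is consistent with the flexible-inference and stability claims above.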