PhysicsCorrect: A Training-Free Approach for Stable Neural PDE Simulations

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural PDE solvers often diverge during long-horizon rollout due to error accumulation. To address this, we propose an online correction framework that requires no retraining: at each prediction step, it formulates a linearized inverse problem based on the PDE residual to enforce physical consistency of the solution. Crucially, we introduce an offline warm-up phase that precomputes and caches the Jacobian matrix and its pseudo-inverse, reducing computational overhead by two orders of magnitude. The method is architecture-agnostic—compatible with Fourier neural operators, U-Nets, and Vision Transformers—and achieves up to 100× error reduction on benchmark PDEs including the Navier–Stokes, wave, and Kuramoto–Sivashinsky equations. Inference latency increases by less than 5%, demonstrating substantial improvements in both long-term stability and accuracy of neural PDE solvers.
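The correction step described above can be sketched concretely. This is a minimal illustration, not the paper's implementation: `residual_fn` stands in for the discretized PDE residual, and the Jacobian is approximated here by finite differences purely for self-containment (the paper precomputes the Jacobian and its pseudoinverse during an offline warm-up, which is what the caching below mimics).

```python
import numpy as np

def warmup_cache(residual_fn, u_ref, eps=1e-6):
    """Offline warm-up (sketch): build the Jacobian of the PDE residual
    at a reference state by finite differences, then cache its
    pseudoinverse so the online correction is a single matrix multiply."""
    n = u_ref.size
    r0 = residual_fn(u_ref)
    J = np.zeros((r0.size, n))
    for j in range(n):
        du = np.zeros(n)
        du[j] = eps
        J[:, j] = (residual_fn(u_ref + du) - r0) / eps
    return np.linalg.pinv(J)  # computed once, reused at every step

def correct(u_pred, residual_fn, J_pinv):
    """Online step: one linearized update that drives the PDE residual
    of the network prediction toward zero."""
    return u_pred - J_pinv @ residual_fn(u_pred)
```

For a linear residual the linearization is exact, so a single call to `correct` recovers the PDE-consistent state; for nonlinear PDEs it acts as one Newton-like projection per rollout step.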

📝 Abstract
Neural networks have emerged as powerful surrogates for solving partial differential equations (PDEs), offering significant computational speedups over traditional methods. However, these models suffer from a critical limitation: error accumulation during long-term rollouts, where small inaccuracies compound exponentially, eventually causing complete divergence from physically valid solutions. We present PhysicsCorrect, a training-free correction framework that enforces PDE consistency at each prediction step by formulating correction as a linearized inverse problem based on PDE residuals. Our key innovation is an efficient caching strategy that precomputes the Jacobian and its pseudoinverse during an offline warm-up phase, reducing computational overhead by two orders of magnitude compared to standard correction approaches. Across three representative PDE systems -- Navier-Stokes fluid dynamics, wave equations, and the chaotic Kuramoto-Sivashinsky equation -- PhysicsCorrect reduces prediction errors by up to 100x while adding negligible inference time (under 5%). The framework integrates seamlessly with diverse architectures including Fourier Neural Operators, UNets, and Vision Transformers, effectively transforming unstable neural surrogates into reliable simulation tools that bridge the gap between deep learning's computational efficiency and the physical fidelity demanded by practical scientific applications.
Problem

Research questions and friction points this paper is trying to address.

Prevent error accumulation in neural PDE simulations
Enforce PDE consistency without training overhead
Stabilize long-term predictions for diverse PDE systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free correction via linearized inverse problem
Efficient caching with precomputed Jacobian pseudoinverse
Seamless integration with diverse neural architectures
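The bullets above can be tied together in a rollout sketch. Everything here is illustrative: `model_step` is a hypothetical one-step neural surrogate, `residual_fn` a hypothetical discretized PDE residual, and `J_pinv` the pseudoinverse cached during the warm-up phase. The per-step overhead is one residual evaluation plus one cached-matrix multiply, which is consistent with the reported sub-5% latency increase.

```python
import numpy as np

def stabilized_rollout(model_step, residual_fn, J_pinv, u0, n_steps):
    """Autoregressive rollout with a training-free correction after
    every prediction. The surrogate is queried as a black box, so any
    architecture (FNO, U-Net, ViT) can be plugged in unchanged."""
    u = u0
    trajectory = [u0]
    for _ in range(n_steps):
        u = model_step(u)                  # raw network prediction
        u = u - J_pinv @ residual_fn(u)    # project back toward PDE consistency
        trajectory.append(u)
    return np.stack(trajectory)
```

Because the correction touches only the predicted state, not the network weights, no retraining or fine-tuning is involved; swapping in a different surrogate only changes `model_step`.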