🤖 AI Summary
To address the accumulation of autoregressive errors—severely amplified in long-term rollouts of chaotic systems—caused by directly applying neural corrections to state variables in hybrid PDE solvers, this paper proposes an **indirect correction mechanism**: learned correction terms are embedded into the governing equations rather than applied to the numerical solutions. This approach suppresses error propagation at the modeling level; theoretical analysis shows it reduces the error amplification rate to $O(\Delta t^{-1} + L)$. The method is solver- and architecture-agnostic. Leveraging numerical differentiation, automatic differentiation, and end-to-end differentiable modeling, the authors validate it across multiscale physical systems, from 1D chaotic dynamics to 3D turbulent flows. Results demonstrate up to a 158.7% improvement in $R^2$, effective suppression of coarse-graining-induced blow-ups, and speedups of several orders of magnitude in 3D turbulence simulation—achieving high efficiency, strong stability, and physical consistency.
📝 Abstract
When simulating partial differential equations, hybrid solvers combine coarse numerical solvers with learned correctors. They promise accelerated simulations while adhering to physical constraints. However, as shown in our theoretical framework, directly applying learned corrections to solver outputs leads to significant autoregressive errors, which originate from amplified perturbations that accumulate during long-term rollouts, especially in chaotic regimes. To overcome this, we propose the Indirect Neural Corrector ($\mathrm{INC}$), which integrates learned corrections into the governing equations rather than applying direct state updates. Our key insight is that $\mathrm{INC}$ reduces the error amplification to the order of $\Delta t^{-1} + L$, where $\Delta t$ is the timestep and $L$ the Lipschitz constant. At the same time, our framework poses no architectural requirements and integrates seamlessly with arbitrary neural networks and solvers. We test $\mathrm{INC}$ in extensive benchmarks, covering numerous differentiable solvers, neural backbones, and test cases ranging from a 1D chaotic system to 3D turbulence. $\mathrm{INC}$ improves the long-term trajectory performance ($R^2$) by up to 158.7%, stabilizes blow-ups under aggressive coarsening, and for complex 3D turbulence cases yields speed-ups of several orders of magnitude. $\mathrm{INC}$ thus enables stable, efficient PDE emulation with formal error reduction, paving the way for faster scientific and engineering simulations with reliable physics guarantees. Our source code is available at https://github.com/tum-pbs/INC.
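The distinction between direct state correction and the indirect, equation-level correction described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: `coarse_rhs` stands in for a coarse solver's right-hand side $F(u)$ of $\mathrm{d}u/\mathrm{d}t = F(u)$, and `correction_net` is a hypothetical placeholder for a trained network.

```python
import numpy as np

def coarse_rhs(u):
    """Coarse right-hand side of du/dt = F(u) (toy linear decay)."""
    return -0.5 * u

def correction_net(u):
    """Stand-in for a learned correction term (fixed toy function here)."""
    return 0.1 * np.sin(u)

def step_direct(u, dt):
    # Direct correction: advance with the coarse solver, then add the
    # learned correction straight to the state. The correction enters at
    # O(1) per step, so perturbations can be amplified over long rollouts.
    u_coarse = u + dt * coarse_rhs(u)
    return u_coarse + correction_net(u_coarse)

def step_indirect(u, dt):
    # INC-style indirect correction: the learned term is embedded in the
    # governing equation, so the time integrator scales its effect by dt.
    return u + dt * (coarse_rhs(u) + correction_net(u))
```

Note that in the indirect update the correction's per-step contribution is multiplied by `dt` by the integrator itself, which is the mechanism behind the reduced error amplification; the forward-Euler step here is only for illustration, since any time integrator can consume the corrected right-hand side.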