INC: An Indirect Neural Corrector for Auto-Regressive Hybrid PDE Solvers

📅 2025-11-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Hybrid PDE solvers that apply neural corrections directly to state variables suffer from accumulating autoregressive errors, which are severely amplified in long-term rollouts of chaotic systems. To address this, the paper proposes an **indirect correction mechanism**: learned correction terms are embedded into the governing equations rather than applied to the numerical solutions. This suppresses error propagation at the modeling level; theoretical analysis shows it reduces the error amplification on the order of $O(\Delta t^{-1} + L)$, where $\Delta t$ is the timestep and $L$ the Lipschitz constant. The method is solver- and architecture-agnostic. Leveraging numerical differentiation, automatic differentiation, and end-to-end differentiable modeling, the authors validate it across multiscale physical systems, from 1D chaotic dynamics to 3D turbulent flows. Results demonstrate up to a 158.7% improvement in $R^2$, effective suppression of coarse-graining-induced blow-ups, and speedups of several orders of magnitude in 3D turbulence simulation, achieving high efficiency, strong stability, and physical consistency.
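The direct-versus-indirect distinction above can be sketched in a few lines. This is a minimal toy, not the paper's implementation: the coarse physics `f`, the "learned" correction `g`, and the forward-Euler integrator are all placeholder choices. The key point is that the indirect (INC-style) correction enters the right-hand side of the governing equation and is therefore scaled by $\Delta t$, while a direct correction perturbs the state at $O(1)$ every step.

```python
import numpy as np

def f(u):
    # Placeholder coarse physics: du/dt = -u (simple linear decay).
    return -u

def g(u):
    # Stand-in for a trained correction network (hypothetical).
    return 0.1 * np.sin(u)

def step_direct(u, dt):
    # Direct correction: integrate the coarse equation, then add the
    # learned term to the state itself (an O(1) perturbation per step).
    return (u + dt * f(u)) + g(u)

def step_indirect(u, dt):
    # Indirect correction (INC-style): embed the learned term in the
    # governing equation, so it enters scaled by dt like any other flux.
    return u + dt * (f(u) + g(u))

def rollout(step, u0, dt, n):
    # Autoregressive rollout: feed each output back in as the next input.
    u = u0
    traj = [u]
    for _ in range(n):
        u = step(u, dt)
        traj.append(u)
    return np.array(traj)

u0 = np.array([1.0])
dt = 0.01
direct = rollout(step_direct, u0, dt, 100)
indirect = rollout(step_indirect, u0, dt, 100)
```

Under this sketch, the per-step footprint of the correction differs by a factor of roughly $1/\Delta t$ between the two schemes, which is the mechanism behind the paper's amplification argument; the actual bound and training setup are developed in the paper itself.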

📝 Abstract
When simulating partial differential equations, hybrid solvers combine coarse numerical solvers with learned correctors. They promise accelerated simulations while adhering to physical constraints. However, as shown in our theoretical framework, directly applying learned corrections to solver outputs leads to significant autoregressive errors, which originate from amplified perturbations that accumulate during long-term rollouts, especially in chaotic regimes. To overcome this, we propose the Indirect Neural Corrector ($\mathrm{INC}$), which integrates learned corrections into the governing equations rather than applying direct state updates. Our key insight is that $\mathrm{INC}$ reduces the error amplification on the order of $\Delta t^{-1} + L$, where $\Delta t$ is the timestep and $L$ the Lipschitz constant. At the same time, our framework poses no architectural requirements and integrates seamlessly with arbitrary neural networks and solvers. We test $\mathrm{INC}$ in extensive benchmarks, covering numerous differentiable solvers, neural backbones, and test cases ranging from a 1D chaotic system to 3D turbulence. INC improves the long-term trajectory performance ($R^2$) by up to 158.7%, stabilizes blow-ups under aggressive coarsening, and for complex 3D turbulence cases yields speed-ups of several orders of magnitude. INC thus enables stable, efficient PDE emulation with formal error reduction, paving the way for faster scientific and engineering simulations with reliable physics guarantees. Our source code is available at https://github.com/tum-pbs/INC
Problem

Research questions and friction points this paper is trying to address.

How can autoregressive error accumulation in hybrid PDE solvers be reduced?
Can neural corrections be integrated into the governing equations instead of applied as direct state updates?
Can long-term simulations be stabilized with formal error-reduction guarantees?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates neural corrections into governing equations
Reduces error amplification with formal guarantees
Works with arbitrary neural networks and solvers
Hao Wei
Technical University of Munich
Aleksandra Franz
Technical University of Munich
Bjoern List
Technical University of Munich
Nils Thuerey
Technical University of Munich
Scientific Machine Learning · Numerical Simulation · PDEs · Fluid Mechanics · Computer Graphics