The Best of Both Worlds: Hybridizing Neural Operators and Solvers for Stable Long-Horizon Inference

📅 2025-12-22
🤖 AI Summary
Neural operators (NOs) suffer from error accumulation and instability in long-horizon forecasting of nonlinear, time-dependent PDEs due to autoregressive inference. To address this, we propose ANCHOR, an online adaptive hybrid inference framework. Methodologically, ANCHOR integrates a pre-trained NO with physics-informed residual modeling and a multi-scale PDE representation, and introduces two key innovations: (i) a ground-truth-free PDE residual estimator based on an exponential moving average (EMA), enabling unsupervised online error monitoring; and (ii) the first incorporation of numerical-analysis-inspired adaptive time stepping into the NO inference loop, dynamically triggering classical solver corrections. Evaluated on 1D/2D Burgers', 2D Allen–Cahn, and 3D heat equations, ANCHOR improves long-horizon prediction stability by over 2× compared to standard NOs while achieving inference 2–3 orders of magnitude faster than high-accuracy numerical solvers.
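The EMA-based, ground-truth-free error monitor described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the smoothing factor `beta` and tolerance `threshold` are hypothetical values, and `residuals` stands in for the normalized PDE residuals computed at each rollout step.

```python
def ema_residual_monitor(residuals, beta=0.9, threshold=0.05):
    """Ground-truth-free error monitor: smooth the per-step normalized
    PDE residual with an exponential moving average and flag steps where
    the smoothed value exceeds a tolerance. `beta` and `threshold` are
    illustrative hyperparameters, not values from the paper."""
    ema = 0.0
    triggers = []
    for r in residuals:
        ema = beta * ema + (1.0 - beta) * r  # EMA update
        triggers.append(ema > threshold)     # would trigger a solver correction
    return triggers
```

The smoothing suppresses one-off residual spikes, so a correction is triggered only when error accumulates persistently across steps.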

📝 Abstract
Numerical simulation of time-dependent partial differential equations (PDEs) is central to scientific and engineering applications, but high-fidelity solvers are often prohibitively expensive for long-horizon or time-critical settings. Neural operator (NO) surrogates offer fast inference across parametric and functional inputs; however, most autoregressive NO frameworks remain vulnerable to compounding errors, and ensemble-averaged metrics provide limited guarantees for individual inference trajectories. In practice, error accumulation can become unacceptable beyond the training horizon, and existing methods lack mechanisms for online monitoring or correction. To address this gap, we propose ANCHOR (Adaptive Numerical Correction for High-fidelity Operator Rollouts), an online, instance-aware hybrid inference framework for stable long-horizon prediction of nonlinear, time-dependent PDEs. ANCHOR treats a pretrained NO as the primary inference engine and adaptively couples it with a classical numerical solver using a physics-informed, residual-based error estimator. Inspired by adaptive time-stepping in numerical analysis, ANCHOR monitors an exponential moving average (EMA) of the normalized PDE residual to detect accumulating error and trigger corrective solver interventions without requiring access to ground-truth solutions. We show that the EMA-based estimator correlates strongly with the true relative L2 error, enabling data-free, instance-aware error control during inference. Evaluations on four canonical PDEs (1D and 2D Burgers', 2D Allen-Cahn, and 3D heat conduction) demonstrate that ANCHOR reliably bounds long-horizon error growth, stabilizes extrapolative rollouts, and significantly improves robustness over standalone neural operators, while remaining substantially more efficient than high-fidelity numerical solvers.
Problem

Research questions and friction points this paper is trying to address.

Hybridizes neural operators with classical solvers for stable long-term PDE predictions
Addresses compounding errors in neural operators via adaptive, online error monitoring
Ensures robust, efficient inference for time-dependent PDEs without ground-truth data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid framework combines neural operators with classical solvers
Adaptive error monitoring triggers corrective solver interventions
Data-free instance-aware error control stabilizes long-horizon predictions
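The hybrid inference loop implied by these contributions can be sketched as below. This is a schematic under assumed interfaces, not the paper's code: `no_step`, `solver_step`, and `residual_fn` are hypothetical callables standing in for the neural-operator step, the classical corrective solver step, and the normalized PDE residual, and `beta`/`tol` are illustrative hyperparameters.

```python
import numpy as np

def hybrid_rollout(u0, no_step, solver_step, residual_fn,
                   n_steps=100, beta=0.9, tol=0.05):
    """ANCHOR-style hybrid rollout sketch: advance with the fast NO,
    monitor an EMA of the PDE residual, and fall back to a classical
    solver step when the estimate exceeds `tol`. All interfaces and
    values here are illustrative assumptions."""
    u, ema = u0, 0.0
    trajectory = [u0]
    for _ in range(n_steps):
        u_next = no_step(u)                        # fast NO prediction
        ema = beta * ema + (1 - beta) * residual_fn(u_next)
        if ema > tol:                              # accumulated error detected
            u_next = solver_step(u)                # corrective solver step
            ema = 0.0                              # reset monitor after correction
        trajectory.append(u_next)
        u = u_next
    return np.stack(trajectory)
```

Because the solver is invoked only when the monitored residual drifts, most steps run at NO speed while the occasional correction bounds long-horizon error growth.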
Rajyasri Roy
Department of Civil and Systems Engineering, Johns Hopkins University
Dibyajyoti Nayak
Department of Civil and Systems Engineering, Johns Hopkins University
Somdatta Goswami
Assistant Professor, Civil and Systems Engineering, Johns Hopkins University
Deep Learning · Physics-informed ML · Computational Mechanics · Fracture Mechanics