🤖 AI Summary
Traditional numerical PDE solvers incur prohibitive computational cost in parametric studies and design optimization, severely limiting efficiency. To address this, the authors propose a neural operator preconditioning framework based on equation reconstruction: the residual arising from parameter deviation is recast as an additional source term, and neural operator predictions are embedded into iterative solvers as high-quality initial guesses. Neural operators such as DeepONet or the Fourier Neural Operator (FNO), trained with minimal data at a single parameter setting, thereby overcome the extrapolation bottleneck and achieve zero-shot generalization across diverse parameter configurations from a single training run. The method rigorously preserves physical constraints and maintains full-order accuracy. Evaluated on canonical problems, including neutron transport, it reduces total computational time by approximately 50%, and it supports arbitrary parameter distributions as well as multigroup coupled eigenvalue problems, significantly enhancing both the generality and practicality of parametric PDE solving.
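The core reformulation can be illustrated with a minimal sketch. This is not the authors' implementation: the operator `A(sigma) = L + sigma*I` is a toy 1D diffusion-like discretization chosen for illustration, and the `surrogate` here is an exact linear solve standing in for a neural operator trained at the single parameter `sigma0`. The parameter-deviation residual `dA @ u` is moved to the right-hand side as an extra source, so the fixed operator (and hence the trained surrogate) can be reused at a new parameter value:

```python
import numpy as np

# Toy 1D diffusion-like operator: A(sigma) = L + sigma * I (illustrative, not the paper's).
n = 50
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # discrete Laplacian
f = np.ones(n)

sigma0, sigma = 1.0, 1.6           # training parameter vs. unseen target parameter
A0 = L + sigma0 * np.eye(n)
A = L + sigma * np.eye(n)
dA = A - A0                        # parameter-deviation operator

# Stand-in for a neural operator trained at sigma0: maps a source to A0^{-1} source.
surrogate = lambda src: np.linalg.solve(A0, src)

# Recast the deviation residual as an extra source:  A0 u = f - dA u,
# then iterate  u_{k+1} = surrogate(f - dA @ u_k)  using only the sigma0 operator.
u = np.zeros(n)
for _ in range(200):
    u = surrogate(f - dA @ u)

u_exact = np.linalg.solve(A, f)
print(np.max(np.abs(u - u_exact)))  # converges to the true sigma=1.6 solution
```

Because the governing operator is applied exactly at every step, the fixed point of this iteration satisfies the full-order equations regardless of surrogate quality; the surrogate only affects how fast it is reached.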
📝 Abstract
The computational overhead of traditional numerical solvers for partial differential equations (PDEs) remains a critical bottleneck for large-scale parametric studies and design optimization. We introduce a Minimal-Data Parametric Neural Operator Preconditioning (MD-PNOP) framework, which establishes a new paradigm for accelerating parametric PDE solvers while strictly preserving physical constraints. The key idea is to recast the residual from parameter deviation as an additional source term, so that any trained neural operator can be used to refine the solution in an offline fashion. This directly addresses the fundamental extrapolation limitation of neural operators, enabling extrapolative generalization of any neural operator trained at a single parameter setting across a wide range of configurations without any retraining. The neural operator predictions are then embedded into iterative PDE solvers as improved initial guesses, thereby reducing convergence iterations without sacrificing accuracy. Unlike purely data-driven approaches, MD-PNOP guarantees that the governing equations remain fully enforced, eliminating concerns about loss of physics or interpretability. The framework is architecture-agnostic and is demonstrated using both Deep Operator Networks (DeepONet) and Fourier Neural Operators (FNO) for Boltzmann transport equation solvers in neutron transport applications. We demonstrate that neural operators trained on a single set of constant parameters successfully accelerate solutions with heterogeneous, sinusoidal, and discontinuous parameter distributions. Moreover, MD-PNOP consistently achieves a ~50% reduction in computational time while maintaining full-order fidelity for fixed-source, single-group eigenvalue, and multigroup coupled eigenvalue problems.
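The second ingredient, using the operator prediction as a preconditioning initial guess, can be sketched with a plain Jacobi iteration. This is a hedged toy example, not the paper's transport solver: the matrix is an illustrative Laplacian-plus-identity system, and the "neural operator prediction" is simulated as the exact solution perturbed by small noise. The point it demonstrates is that a warm start cuts the iteration count while the converged answer (and hence the fidelity) is unchanged:

```python
import numpy as np

def jacobi_iterations(A, b, x0, tol=1e-8, max_iter=10000):
    """Count Jacobi sweeps needed to reach the update tolerance from guess x0."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = x0.copy()
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return k + 1
        x = x_new
    return max_iter

n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + np.eye(n)  # Laplacian + I
b = np.ones(n)
u_exact = np.linalg.solve(A, b)

# Simulated neural operator prediction: true solution plus a small error
# (a stand-in assumption; a real surrogate's error would vary with the input).
rng = np.random.default_rng(0)
u_pred = u_exact + 0.01 * rng.standard_normal(n)

iters_cold = jacobi_iterations(A, b, np.zeros(n))  # naive zero initial guess
iters_warm = jacobi_iterations(A, b, u_pred)       # operator-informed guess
print(iters_cold, iters_warm)  # the warm start converges in fewer sweeps
```

Since both runs iterate the same full-order system to the same tolerance, accuracy is identical; only the number of sweeps, and therefore the runtime, differs.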