🤖 AI Summary
This work addresses the problem of spurious fixed points in deep learning–based hybrid iterative PDE solvers, which arise from a mismatch between training objectives and iteration strategies: the neural update vanishes while the physical residual remains large, so the iteration stalls without actually solving the PDE. Studying hybrid methods that couple classical numerical solvers with neural operators such as DeepONet and the Fourier Neural Operator, the authors show that classical Anderson acceleration, which minimizes the fixed-point update increment, is poorly suited to nonlinear neural operators. They propose Physics-Aware Anderson Acceleration (PA-AA), which instead explicitly minimizes the physical residual, aligning the iteration with physical consistency. Experiments demonstrate that PA-AA restores reliable convergence in substantially fewer iterations and mitigates the stagnation failure modes commonly observed in traditional hybrid solvers.
📝 Abstract
Deep learning-based hybrid iterative methods (DL-HIMs) integrate classical numerical solvers with neural operators, utilizing their complementary spectral biases to accelerate convergence. Despite this promise, many DL-HIMs stagnate at false fixed points where neural updates vanish while the physical residual remains large, raising questions about reliability in scientific computing. In this paper, we provide evidence that performance is highly sensitive to training paradigms and update strategies, even when the neural architecture is fixed. Through a detailed study of a DeepONet-based hybrid iterative numerical transferable solver (HINTS) and an FFT-based Fourier neural solver (FNS), we show that significant physical residuals can persist when training objectives are not aligned with solver dynamics and problem physics. We further examine Anderson acceleration (AA) and demonstrate that its classical form is ill-suited for nonlinear neural operators. To overcome this, we introduce physics-aware Anderson acceleration (PA-AA), which minimizes the physical residual rather than the fixed-point update. Numerical experiments confirm that PA-AA restores reliable convergence in substantially fewer iterations. These findings provide a concrete answer to ongoing controversies surrounding AI-based PDE solvers: reliability hinges not only on architectures but on physically informed training and iteration design.
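The abstract's central idea — replacing classical Anderson acceleration's fixed-point-update objective with the physical residual — can be sketched for a linear system $Ax = b$. Note that the details below are assumptions, not the paper's actual algorithm: the function name `pa_aa`, the constrained least-squares formulation (weights summing to one, solved via Lagrange multipliers), and a damped-Jacobi sweep standing in for the neural-operator update are all illustrative choices. The key difference from classical AA is the objective: the mixing weights minimize $\|b - Ax\|$ of the mixed iterate rather than $\|g(x) - x\|$.

```python
import numpy as np

def pa_aa(A, b, g, x0, m=5, tol=1e-10, max_iter=200):
    """Sketch of a physics-aware Anderson acceleration (assumed form).

    Classical AA chooses mixing weights alpha to minimize the combined
    fixed-point update ||g(x) - x||. Here the weights instead minimize
    the *physical* residual ||b - A x|| of the mixed iterate, which is
    linear in x whenever sum(alpha) == 1.
    """
    x = x0.copy()
    G, R = [], []  # histories of g(x_k) and their physical residuals
    for k in range(max_iter):
        gx = g(x)
        G.append(gx)
        R.append(b - A @ gx)
        G, R = G[-m:], R[-m:]          # keep a window of m iterates
        Rmat = np.column_stack(R)
        n = Rmat.shape[1]
        # Minimize ||Rmat @ alpha|| subject to sum(alpha) == 1,
        # via the KKT system of the Lagrangian.
        M = np.block([[Rmat.T @ Rmat, np.ones((n, 1))],
                      [np.ones((1, n)), np.zeros((1, 1))]])
        rhs = np.concatenate([np.zeros(n), [1.0]])
        alpha = np.linalg.lstsq(M, rhs, rcond=None)[0][:n]
        x = np.column_stack(G) @ alpha  # residual-optimal mixed iterate
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            break
    return x, k
```

As a smoke test, a damped Jacobi sweep (playing the role of the hybrid solver's smoother/neural update) on a small symmetric system converges to machine-precision residual in a handful of iterations; swapping in a trained neural operator for `g` would not change the outer loop.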