🤖 AI Summary
This work addresses spectral bias in physics-informed neural networks and neural operators for solving partial differential equations, whereby low-frequency components of the solution converge preferentially, hindering accurate modeling of high-frequency details. Using diagnostic tools such as frequency-decomposed error analysis, Barron norms, and higher-order statistical moments, the study systematically demonstrates that this bias stems from optimization dynamics rather than merely from limitations in representational capacity. To mitigate it, the authors combine second-order optimization with spectrum-aware loss formulations, significantly improving early-stage recovery of high-frequency modes across benchmark problems, including the KdV equation, the wave equation, and turbulent flow reconstruction, without incurring additional inference overhead.
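The "frequency-decomposed error analysis" mentioned above can be illustrated with a simple FFT-based diagnostic. The sketch below is an assumption about how such a metric might be computed, not the paper's implementation: the function name `frequency_resolved_error`, the band count, and the toy signals are all illustrative. It bins the Fourier transform of the prediction error into wavenumber bands, which is what lets one track how fast low versus high frequencies converge during training.

```python
import numpy as np

def frequency_resolved_error(u_pred, u_true, n_bands=4):
    """Relative spectral error per wavenumber band, low to high frequency.

    u_pred, u_true: 1D arrays sampled on a uniform periodic grid.
    Errors are normalized by the total spectral energy of u_true so that
    bands carrying little true energy do not blow up numerically.
    """
    err_hat = np.fft.rfft(u_pred - u_true)
    true_norm = np.linalg.norm(np.fft.rfft(u_true))
    k = np.arange(err_hat.size)
    edges = np.linspace(0, err_hat.size, n_bands + 1)
    return np.array([
        np.linalg.norm(err_hat[(k >= lo) & (k < hi)]) / true_norm
        for lo, hi in zip(edges[:-1], edges[1:])
    ])

# Toy check: a predictor that captures the slow mode but misses a fast one.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u_true = np.sin(x) + 0.5 * np.sin(100 * x)
u_pred = np.sin(x)  # high-frequency mode entirely absent
print(frequency_resolved_error(u_pred, u_true))
# -> error is negligible in the low bands and concentrated in the highest band
```

A spectrally biased model trained on `u_true` would show exactly this signature early in training, with the high-band error decaying much more slowly than the low-band error.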
📝 Abstract
Neural networks and Kolmogorov-Arnold networks (KANs) used to solve partial differential equations (PDEs), including physics-informed neural networks (PINNs), physics-informed KANs (PIKANs), and neural operators, are known to exhibit spectral bias, whereby low-frequency components of the solution are learned significantly faster than high-frequency modes. While spectral bias is often treated as an intrinsic representational limitation of neural architectures, its interaction with optimization dynamics and physics-based loss formulations remains poorly understood. In this work, we provide a systematic investigation of spectral bias in physics-informed and operator-learning frameworks, with emphasis on the coupled roles of network architecture, activation functions, loss design, and optimization strategy. We quantify spectral bias through frequency-resolved error metrics, Barron-norm diagnostics, and higher-order statistical moments, enabling a unified analysis across elliptic, hyperbolic, and dispersive PDEs. Through diverse benchmark problems, including the Korteweg-de Vries, wave, and steady-state diffusion-reaction equations, turbulent flow reconstruction, and earthquake dynamics, we demonstrate that spectral bias is not merely representational but fundamentally dynamical. In particular, second-order optimization methods substantially alter the spectral learning order, enabling earlier and more accurate recovery of high-frequency modes for all PDE types. For neural operators, we further show that spectral bias depends on the operator architecture and can be effectively mitigated through spectrum-aware loss formulations without increasing inference cost.
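As a concrete illustration of a spectrum-aware loss, here is a minimal sketch, assuming the operator's target fields are sampled on a uniform periodic grid. The weighting scheme $(1+k)^\alpha$ and the name `spectrum_aware_loss` are illustrative assumptions, not the paper's exact formulation; the key property, matching the abstract's claim, is that reweighting acts only on the training objective, leaving inference cost unchanged.

```python
import torch

def spectrum_aware_loss(u_pred, u_true, alpha=1.0):
    """Mean squared error in Fourier space with wavenumber-dependent weights.

    u_pred, u_true: (batch, n) tensors sampled on a uniform periodic grid.
    alpha > 0 upweights high wavenumbers, counteracting spectral bias.
    Only the training objective changes; the trained model is evaluated
    exactly as before, so inference cost is unaffected.
    """
    diff_hat = torch.fft.rfft(u_pred - u_true, dim=-1)
    k = torch.arange(diff_hat.shape[-1],
                     device=u_pred.device, dtype=u_pred.dtype)
    w = (1.0 + k) ** alpha
    w = w / w.mean()  # keep the overall loss scale comparable to plain MSE
    return (w * diff_hat.abs() ** 2).mean()
```

On the optimization side, the second-order methods the abstract refers to could plausibly be instantiated with a quasi-Newton optimizer such as `torch.optim.LBFGS` in place of Adam; the abstract does not name a specific optimizer, so treat that as one possible realization rather than the authors' exact choice.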