🤖 AI Summary
This work investigates the Neural Tangent Kernel (NTK) theory for deep neural networks trained under physics-informed losses involving differential operators, focusing on initialization, training dynamics, convergence, and explicit NTK structure.
Method: We establish the first analytical framework for NTK under differential-operator-driven losses, integrating NTK theory, spectral analysis, and Physics-Informed Neural Network (PINN) modeling to rigorously derive the closed-form expression and spectral properties of the NTK.
Contribution/Results: We theoretically prove that physics-informed losses do not universally accelerate eigenvalue decay or exacerbate spectral bias; instead, convergence behavior is jointly governed by how differential operators are embedded and the loss structure. Experiments validate the predicted spectral decay rates and bias patterns. This work provides the first systematic NTK-level theoretical explanation for generalization and optimization dynamics in PINNs, extending NTK analysis beyond conventional supervised learning settings.
📝 Abstract
Spectral bias is a significant phenomenon in neural network training and can be explained by neural tangent kernel (NTK) theory. In this work, we develop the NTK theory for deep neural networks with physics-informed loss, providing insights into the convergence of NTK during initialization and training, and revealing its explicit structure. We find that, in most cases, the differential operators in the loss function do not induce a faster eigenvalue decay rate and stronger spectral bias. Some experimental results are also presented to verify the theory.