AI Summary
This paper investigates the intrinsic relationship between the Neural Tangent Kernel (NTK) and the Fisher Information Matrix (FIM) for two-layer ReLU networks with random hidden-layer weights. Using random matrix theory and spectral analysis, we establish, for the first time, an explicit linear transformation between the NTK and FIM in the infinite-width limit. We further derive the exact spectral decomposition of the NTK, obtaining closed-form expressions for its principal eigenvalues and corresponding eigenfunctions, and reveal its low-rank structure. Leveraging this spectral characterization, we develop an efficient approximation formula for functions representable by the network. Our core contributions are threefold: (1) clarifying the fundamental equivalence between the NTK and FIM; (2) providing an analytically tractable spectral theory for the NTK; and (3) achieving a tight functional-space approximation that precisely characterizes the network's expressive power.
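As a rough numerical illustration (not the paper's code, and with hypothetical dimensions and sample sizes), the sketch below builds the empirical NTK Gram matrix of a two-layer ReLU network f(x) = (1/√m) Σⱼ aⱼ ReLU(wⱼ·x) with random hidden weights, by stacking the parameter Jacobians of all samples; the eigenvalue spectrum of this Gram matrix is what the paper characterizes in closed form, including its approximate low-rank structure.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 5, 2000, 50                # input dim, hidden width, sample count (illustrative)
W = rng.standard_normal((m, d))      # random, fixed-at-init hidden-layer weights w_j
a = rng.standard_normal(m)           # output-layer weights a_j
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # place inputs on the unit sphere

Z = X @ W.T                          # pre-activations w_j . x_i, shape (n, m)
act = np.maximum(Z, 0.0)             # ReLU(w_j . x_i)
ind = (Z > 0).astype(float)          # ReLU'(w_j . x_i)

# Per-sample gradient of f with respect to all parameters (a, W):
#   df/da_j     = ReLU(w_j . x) / sqrt(m)
#   df/dW_{j,k} = a_j * ReLU'(w_j . x) * x_k / sqrt(m)
J_a = act / np.sqrt(m)                                        # shape (n, m)
J_W = (a[None, :] * ind)[:, :, None] * X[:, None, :]          # shape (n, m, d)
J = np.concatenate([J_a, J_W.reshape(n, -1) / np.sqrt(m)], axis=1)

# Empirical NTK Gram matrix: K_{ij} = <grad f(x_i), grad f(x_j)>
K = J @ J.T
eigvals = np.sort(np.linalg.eigvalsh(K))[::-1]
# The leading eigenvalues dominate the spectrum, reflecting the
# (approximate) low-rank structure the paper derives analytically.
```

The same Jacobian matrix J also gives the empirical FIM of the model (up to the output noise model) as JᵀJ, so K = JJᵀ and the FIM share their nonzero spectrum; this is the finite-width shadow of the NTK-FIM correspondence the paper makes exact in the infinite-width limit.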
Abstract
We study Fisher information matrices (FIM) and neural tangent kernels (NTK) for two-layer ReLU networks with random hidden-layer weights. We describe the relation between the two notions as a linear transformation, and give the spectral decomposition of the NTK with concrete forms of the eigenfunctions associated with the major eigenvalues. We also obtain an approximation formula for the functions represented by these two-layer neural networks.