🤖 AI Summary
Implicit Neural Representations (INRs) suffer from the spectral bias of MLPs, which hinders high-fidelity reconstruction of high-frequency details. To address this, we propose an inductive gradient adjustment method grounded in the empirical Neural Tangent Kernel (eNTK), which, for the first time, formally bridges spectral bias and training dynamics. Our approach dynamically designs a gradient transformation matrix that mitigates the bias directionally, without altering the network architecture. By establishing a linearized model of the training dynamics, it enables generalized gradient adjustment across diverse INR architectures and tasks. Experiments demonstrate consistent improvements across multiple INR variants (e.g., SIREN, Fourier features) and reconstruction tasks (images and videos): reconstructed outputs exhibit richer texture, sharper edges, and superior quantitative performance, achieving higher PSNR and lower LPIPS than state-of-the-art training strategies.
📝 Abstract
Implicit Neural Representations (INRs), as a versatile representation paradigm, have achieved success in various computer vision tasks. Due to the spectral bias of vanilla multi-layer perceptrons (MLPs), existing methods focus on designing MLPs with sophisticated architectures or repurposing training techniques for highly accurate INRs. In this paper, we delve into the linear dynamics model of MLPs and theoretically identify the empirical Neural Tangent Kernel (eNTK) matrix as a reliable link between spectral bias and training dynamics. Based on the eNTK matrix, we propose a practical inductive gradient adjustment method, which purposefully improves the spectral bias via inductive generalization of an eNTK-based gradient transformation matrix. We evaluate our method on different INR tasks with various INR architectures and compare it to existing training techniques. The superior representation performance clearly validates the advantage of our proposed method. Armed with our gradient adjustment method, better INRs with enhanced texture details and sharper edges can be learned from data through tailored improvements on spectral bias.
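The core idea of an eNTK-based gradient transformation can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the toy MLP, the regularized-inverse preconditioner `g = Jᵀ(K + λI)⁻¹r`, and all names here are illustrative assumptions. It shows the mechanism the abstract describes: form the empirical NTK `K = J Jᵀ` from the per-sample Jacobian `J`, then transform the gradient so that all eNTK eigen-directions of the residual shrink at a comparable rate, instead of the low-frequency (large-eigenvalue) directions dominating as in plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D regression with a high-frequency target -- the regime where
# spectral bias makes plain gradient descent converge slowly.
x = np.linspace(-1.0, 1.0, 32).reshape(-1, 1)   # (N, 1) coordinates
y = np.sin(8.0 * np.pi * x)                     # (N, 1) high-frequency signal

# One-hidden-layer MLP: f(x) = w2 @ tanh(w1 x + b1) + b2
h = 16
w1 = rng.normal(0.0, 1.0, (h, 1))
b1 = np.zeros(h)
w2 = rng.normal(0.0, 1.0 / np.sqrt(h), (1, h))
b2 = np.zeros(1)

def forward(x):
    z = x @ w1.T + b1                # (N, h) pre-activations
    a = np.tanh(z)
    return a @ w2.T + b2, z, a       # outputs (N, 1)

def jacobian(x):
    """Per-sample Jacobian of the scalar output w.r.t. all parameters, (N, P)."""
    _, z, a = forward(x)
    s = 1.0 - np.tanh(z) ** 2        # tanh'
    d_w1 = (w2 * s) * x              # df/dw1_j = w2_j * tanh'(z_j) * x
    d_b1 = w2 * s                    # df/db1_j = w2_j * tanh'(z_j)
    d_w2 = a                         # df/dw2_j = a_j
    d_b2 = np.ones((x.shape[0], 1))  # df/db2   = 1
    return np.concatenate([d_w1, d_b1, d_w2, d_b2], axis=1)

def adjusted_gradient(x, y, lam=0.1):
    """eNTK-preconditioned gradient g = J^T (K + lam*I)^(-1) r.

    In function space a step along -g moves f by ~ K (K + lam*I)^(-1) r,
    so every eNTK eigen-direction of the residual shrinks at a comparable
    rate rather than only the low-frequency ones."""
    f, _, _ = forward(x)
    r = f - y                                   # residual (N, 1)
    J = jacobian(x)                             # (N, P)
    K = J @ J.T                                 # empirical NTK (N, N)
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), r)
    return (J.T @ alpha).ravel(), J, K, r

g, J, K, r = adjusted_gradient(x, y)
loss0 = float(np.mean(r ** 2))

def step_loss(lr):
    """MSE after moving every parameter by -lr * g (without mutating them)."""
    W1 = w1 - lr * g[:h].reshape(h, 1)
    B1 = b1 - lr * g[h:2 * h]
    W2 = w2 - lr * g[2 * h:3 * h].reshape(1, h)
    B2 = b2 - lr * g[3 * h:]
    f = np.tanh(x @ W1.T + B1) @ W2.T + B2
    return float(np.mean((f - y) ** 2))

# g is a descent direction (g . J^T r = r^T K (K + lam*I)^{-1} r > 0),
# so a small enough step is guaranteed to reduce the loss; backtrack on
# the step size until it does.
lr = 0.1
while step_loss(lr) >= loss0:
    lr *= 0.5
loss1 = step_loss(lr)
print(f"loss: {loss0:.4f} -> {loss1:.4f} (step size {lr:g})")
```

The regularized solve `(K + λI)⁻¹` is one simple, generic choice of gradient transformation matrix; the paper's method instead derives the transformation inductively from the eNTK, but both act on the same object, the gradient viewed through the eNTK's eigen-spectrum.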