Inductive Gradient Adjustment For Spectral Bias In Implicit Neural Representations

📅 2024-10-17
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Implicit Neural Representations (INRs) suffer from the spectral bias of MLPs, which hinders high-fidelity reconstruction of high-frequency details. To address this, the paper proposes an inductive gradient adjustment method grounded in the empirical Neural Tangent Kernel (eNTK), using the eNTK matrix as a formal link between spectral bias and training dynamics. The approach designs a gradient transformation matrix that purposefully mitigates spectral bias without altering the network architecture, and a linearized training-dynamics model lets the adjustment generalize across diverse INR architectures and tasks. Experiments show consistent improvements across multiple INR variants (e.g., SIREN, Fourier features) and reconstruction tasks (images and videos): outputs exhibit richer texture, sharper edges, and higher PSNR with lower LPIPS than state-of-the-art training strategies.

📝 Abstract
Implicit Neural Representations (INRs), as a versatile representation paradigm, have achieved success in various computer vision tasks. Due to the spectral bias of vanilla multi-layer perceptrons (MLPs), existing methods focus on designing MLPs with sophisticated architectures or repurposing training techniques for highly accurate INRs. In this paper, we delve into the linear dynamics model of MLPs and theoretically identify the empirical Neural Tangent Kernel (eNTK) matrix as a reliable link between spectral bias and training dynamics. Based on the eNTK matrix, we propose a practical inductive gradient adjustment method, which purposefully improves the spectral bias via inductive generalization of the eNTK-based gradient transformation matrix. We evaluate our method on different INR tasks with various INR architectures and compare it to existing training techniques. The superior representation performance clearly validates the advantage of our proposed method. Armed with our gradient adjustment method, better INRs with enhanced texture details and sharper edges can be learned from data by tailored improvements on spectral bias.
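The eNTK-based gradient transformation described in the abstract can be sketched under simplifying assumptions. This is not the paper's exact IGA algorithm: it assumes a model linear in its weights (so the eNTK is constant), uses random Fourier features as a stand-in INR, and picks a spectrum-flattening exponent `alpha` purely for illustration.

```python
import numpy as np

# Hedged sketch, not the paper's exact algorithm: assume a model linear in
# its trainable weights, f(x) = phi(x) @ w, so the Jacobian is J = phi(X)
# and the empirical NTK, K = J @ J.T, is constant during training.
# Gradient descent fits large-eigenvalue (low-frequency) eNTK directions
# first; the transformation matrix below flattens the eNTK spectrum so the
# slow, high-frequency directions are learned faster.

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 64)[:, None]

# Illustrative random-Fourier-feature parameterization (an assumption,
# standing in for an INR such as SIREN).
B = rng.normal(scale=8.0, size=(1, 32))
phi = np.concatenate([np.sin(2 * np.pi * X @ B),
                      np.cos(2 * np.pi * X @ B)], axis=1)  # J, shape (64, 64)

# Target signal with a pronounced high-frequency component.
target = np.sin(2 * np.pi * X[:, 0]) + 0.5 * np.sin(24 * np.pi * X[:, 0])

K = phi @ phi.T                          # empirical NTK for the linear model
evals, evecs = np.linalg.eigh(K)

# T = V diag(lambda^-alpha) V^T: applying T to the residual rescales each
# eNTK eigendirection, so the effective kernel becomes T K with eigenvalues
# lambda^(1 - alpha); alpha = 0 recovers plain gradient descent.
alpha = 0.5
scale = np.where(evals > 1e-8, evals ** (-alpha), 0.0)
T = evecs @ (scale[:, None] * evecs.T)

w = np.zeros(phi.shape[1])
lr = 0.5 / np.sqrt(evals.max())          # stable step for the flattened kernel
mse0 = np.mean((phi @ w - target) ** 2)
for _ in range(300):
    r = phi @ w - target                 # residual in function space
    w -= lr * phi.T @ (T @ r)            # eNTK-adjusted gradient step
mse = np.mean((phi @ w - target) ** 2)
print(f"mse: {mse0:.3f} -> {mse:.6f}")
```

In this linearized setting the adjusted update drives the function-space residual with the kernel `T @ K`, whose flattened spectrum equalizes convergence rates across frequencies; the paper's contribution is making such an adjustment practical and inductive for real, nonlinear INRs.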
Problem

Research questions and friction points this paper is trying to address.

Addresses spectral bias in Implicit Neural Representations (INRs)
Proposes Inductive Gradient Adjustment (IGA) to improve spectral bias
Enhances INRs for better texture details and sharpened edges
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inductive Gradient Adjustment for spectral bias
eNTK-based gradient transformation matrix
Improved INRs with enhanced texture details