NTK-Guided Implicit Neural Teaching

📅 2025-11-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost and slow convergence of training implicit neural representations (INRs) for high-resolution signal modeling, which stem from inefficient coordinate sampling, this paper proposes a neural tangent kernel (NTK)-guided dynamic coordinate selection mechanism. The method quantifies sample importance via the NTK-calibrated norm of the loss gradient, jointly accounting for reconstruction error and inter-coordinate coupling, and adaptively selects the coordinates that contribute most to global function updates. Unlike fixed or heuristic sampling strategies, this approach significantly improves training efficiency under standard MLP architectures. Experiments demonstrate an average 47% reduction in training time while maintaining or even improving reconstruction quality, establishing it as the new state of the art (SOTA) among sampling-based INR acceleration methods.

📝 Abstract
Implicit Neural Representations (INRs) parameterize continuous signals via multilayer perceptrons (MLPs), enabling compact, resolution-independent modeling for tasks like image, audio, and 3D reconstruction. However, fitting high-resolution signals demands optimizing over millions of coordinates, incurring prohibitive computational costs. To address this, we propose NTK-Guided Implicit Neural Teaching (NINT), which accelerates training by dynamically selecting coordinates that maximize global functional updates. Leveraging the Neural Tangent Kernel (NTK), NINT scores examples by the norm of their NTK-augmented loss gradients, capturing both fitting errors and heterogeneous leverage (self-influence and cross-coordinate coupling). This dual consideration enables faster convergence than existing methods. Through extensive experiments, we demonstrate that NINT reduces training time by nearly half while maintaining or improving representation quality, establishing state-of-the-art acceleration among recent sampling-based strategies.
Problem

Research questions and friction points this paper is trying to address.

Accelerating training of Implicit Neural Representations for high-resolution signals
Reducing computational costs by optimizing coordinate selection dynamically
Maintaining representation quality while achieving faster convergence rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

NTK-Guided Implicit Neural Teaching accelerates training
Dynamically selects coordinates maximizing functional updates
Uses Neural Tangent Kernel to score loss gradients
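The scoring idea in the bullets above can be sketched concretely. Below is a minimal, illustrative NumPy example (not the authors' implementation) that builds the empirical NTK of a tiny one-hidden-layer MLP from its parameter Jacobian, then ranks coordinates by the magnitude of the residual weighted by NTK leverage, so that both fitting error and cross-coordinate coupling influence selection. All names, the network size, and the exact score `|r_i| * ||K[:, i]||` are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-hidden-layer MLP f(x) = w2 . tanh(W1 x + b1), fitting a 1-D signal.
# (Illustrative stand-in for an INR; sizes are arbitrary.)
H = 16
W1 = rng.normal(size=(H, 1))
b1 = rng.normal(size=H)
w2 = rng.normal(size=H) / np.sqrt(H)

def forward(x):
    """Network output for coordinates x of shape (N, 1) -> (N,)."""
    return np.tanh(x @ W1.T + b1) @ w2

def jacobian(x):
    """Analytic Jacobian of outputs w.r.t. all parameters, shape (N, P)."""
    h = np.tanh(x @ W1.T + b1)           # hidden activations, (N, H)
    dh = 1.0 - h**2                      # tanh derivative
    dW1 = (dh * w2)[:, :, None] * x[:, None, :]   # d f / d W1, (N, H, 1)
    db1 = dh * w2                        # d f / d b1, (N, H)
    dw2 = h                              # d f / d w2, (N, H)
    return np.concatenate([dW1.reshape(len(x), -1), db1, dw2], axis=1)

# Dense coordinate grid and target signal.
x = np.linspace(-1.0, 1.0, 200)[:, None]
y = np.sin(3 * np.pi * x[:, 0])

J = jacobian(x)          # (N, P)
K = J @ J.T              # empirical NTK, (N, N): K[i, j] couples coords i, j
r = forward(x) - y       # per-coordinate residuals

# Score each coordinate by the norm of its NTK-weighted functional update:
# training on coordinate i moves the function by K[:, i] * r_i everywhere,
# so |r_i| * ||K[:, i]|| combines fitting error with leverage.
scores = np.abs(r) * np.linalg.norm(K, axis=0)
top_k = np.argsort(scores)[::-1][:32]   # coordinates to train on next
```

In practice one would recompute (or approximate) the scores periodically during training, since both the residuals and the empirical NTK drift as the parameters change.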
Authors
Chen Zhang (The University of Hong Kong)
Wei Zuo (The University of Hong Kong)
Bingyang Cheng (The University of Hong Kong)
Yikun Wang (Fudan University)
Wei-Bin Kou (The University of Hong Kong)
Yik-Chung Wu (The University of Hong Kong)
Ngai Wong (The University of Hong Kong)