Dynamical Implicit Neural Representations

📅 2025-11-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Implicit Neural Representations (INRs) suffer from spectral bias, limiting their ability to model high-frequency visual and geometric signals. To address this, we propose Dynamic Implicit Neural Representations (DINR), the first INR framework formulated as a continuous-time dynamical system, where features evolve explicitly over time to mitigate spectral bias. Our method introduces dynamical complexity regularization to balance expressivity and generalization, supported theoretically by Rademacher complexity and the neural tangent kernel. DINR employs a differentiable continuous-dynamics architecture enabling end-to-end training. Extensive experiments on image representation, field reconstruction, and data compression demonstrate that DINR significantly improves convergence speed, signal fidelity, and generalization performance—consistently outperforming static INR baselines across all tasks.

Technology Category

Application Category

📝 Abstract
Implicit Neural Representations (INRs) provide a powerful continuous framework for modeling complex visual and geometric signals, but spectral bias remains a fundamental challenge, limiting their ability to capture high-frequency details. Orthogonal to existing remedy strategies, we introduce Dynamical Implicit Neural Representations (DINR), a new INR modeling framework that treats feature evolution as a continuous-time dynamical system rather than a discrete stack of layers. This dynamical formulation mitigates spectral bias by enabling richer, more adaptive frequency representations through continuous feature evolution. Theoretical analysis based on Rademacher complexity and the Neural Tangent Kernel demonstrates that DINR enhances expressivity and improves training dynamics. Moreover, regularizing the complexity of the underlying dynamics provides a principled way to balance expressivity and generalization. Extensive experiments on image representation, field reconstruction, and data compression confirm that DINR delivers more stable convergence, higher signal fidelity, and stronger generalization than conventional static INRs.
Problem

Research questions and friction points this paper is trying to address.

Addresses spectral bias in Implicit Neural Representations
Introduces a continuous-time dynamical system for feature evolution
Enhances expressivity and generalization in signal modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

DINR treats feature evolution as continuous-time dynamical system
DINR mitigates spectral bias through adaptive frequency representations
DINR regularizes dynamics complexity to balance expressivity generalization
🔎 Similar Papers
No similar papers found.
Yesom Park
Yesom Park
University of California, Los Angeles
K
Kelvin Kan
University of California, Los Angeles
Thomas Flynn
Thomas Flynn
Brookhaven National Laboratory
Y
Yi Huang
Brookhaven National Laboratory
Shinjae Yoo
Shinjae Yoo
Brookhaven National Lab
Machine Learning
S
Stanley Osher
University of California, Los Angeles
X
Xihaier Luo
Brookhaven National Laboratory