Tuning the Frequencies: Robust Training for Sinusoidal Neural Networks

๐Ÿ“… 2024-07-30
๐Ÿ“ˆ Citations: 1
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses the instability in initialization and training, the lack of theoretical foundations, and the susceptibility to overfitting of Sinusoidal Representation Networks (SIRENs) in implicit neural representation (INR) tasks. We propose the first spectral analysis framework grounded in amplitude-phase decomposition. Specifically, we establish the theoretical result that the output spectrum of a sine-based MLP consists solely of integer linear combinations of the input frequencies, a foundational insight that enables the design of a spectrum-aware initialization and a dynamic spectral bound constraint enforced during training (the TUNER algorithm). Our method significantly enhances training stability and convergence speed, achieves high-fidelity reconstruction of fine details in image and geometric signal modeling, and effectively mitigates overfitting. By bridging spectral theory with INR optimization, TUNER provides an interpretable, controllable, and frequency-domain robust training paradigm for implicit neural representations.
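The core spectral claim can be checked numerically with a minimal NumPy sketch (not the paper's code): feeding a pure tone through a single sine nonlinearity produces spectral energy only at odd integer multiples of the input frequency, as predicted by the Jacobi-Anger expansion sin(a sin θ) = 2 Σₖ J₂ₖ₊₁(a) sin((2k+1)θ). The frequency and amplitude values below are illustrative choices, not from the paper.

```python
import numpy as np

# Sample a single-frequency input signal on a uniform grid.
t = np.linspace(0, 1, 4096, endpoint=False)
omega = 8.0                       # input frequency (cycles per unit), illustrative
a = 2.0                           # layer weight (amplitude), illustrative
x = np.sin(2 * np.pi * omega * t)

# Apply one sine "layer" to the signal.
y = np.sin(a * x)

# Inspect the spectrum: energy should appear only at odd integer
# multiples of omega (8, 24, 40, ...), with no other frequencies.
spec = np.abs(np.fft.rfft(y)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
peaks = freqs[spec > 1e-6]
print(peaks)   # odd multiples of omega: 8, 24, 40, ...
```

Composing more layers repeats this effect, which is why deep sinusoidal MLPs generate the large combinatorial set of integer-combination frequencies that TUNER's spectral bound then constrains.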

๐Ÿ“ Abstract
Sinusoidal neural networks have been shown to be effective as implicit neural representations (INRs) of low-dimensional signals, due to their smoothness and high representation capacity. However, initializing and training them remain empirical tasks that lack a deeper understanding to guide the learning process. To fill this gap, our work introduces a theoretical framework that explains the capacity property of sinusoidal networks and offers robust control mechanisms for initialization and training. Our analysis is based on a novel amplitude-phase expansion of the sinusoidal multilayer perceptron, showing how its layer compositions produce a large number of new frequencies expressed as integer combinations of the input frequencies. This relationship can be directly used to initialize the input neurons, as a form of spectral sampling, and to bound the network's spectrum while training. Our method, referred to as TUNER (TUNing sinusoidal nEtwoRks), greatly improves the stability and convergence of sinusoidal INR training, leading to detailed reconstructions, while preventing overfitting.
Problem

Research questions and friction points this paper is trying to address.

Instability in initializing and training sinusoidal networks
Slow or unreliable convergence of INR training
Overfitting that degrades detailed reconstructions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Theoretical framework for sinusoidal networks
Amplitude-phase expansion analysis
Spectral sampling for network initialization
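The spectral-sampling initialization can be sketched as follows. The helper `spectral_init` and its parameters (`bandlimit`, `omega0`) are illustrative assumptions, not the paper's exact scheme: the idea shown is only that first-layer frequencies are drawn from an integer grid up to a band limit, so the frequencies later layers generate (integer combinations of these) stay within a controlled spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_init(n_neurons, bandlimit, omega0=np.pi):
    """Sample first-layer frequencies from an integer grid (hypothetical sketch).

    Each neuron gets a frequency k * omega0 with integer k <= bandlimit,
    with a random sign, approximating spectral sampling of the input band.
    """
    k = rng.integers(1, bandlimit + 1, size=n_neurons)    # integer frequency indices
    sign = rng.choice([-1.0, 1.0], size=n_neurons)        # random orientation
    return sign * k * omega0

# First-layer weights for a 1D input, band-limited to 16 * omega0.
W1 = spectral_init(64, bandlimit=16)
print(W1[:5])
```

A training-time spectral bound would then cap the magnitudes of hidden-layer weights so that the integer combinations of these base frequencies cannot grow past the target band; the exact constraint used by TUNER is in the paper.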
๐Ÿ”Ž Similar Papers