🤖 AI Summary
Sinusoidal neural fields (SNFs) suffer from high training costs and slow convergence, and existing signal-propagation-based weight initialization schemes are suboptimal. Method: We observe that applying a constant scaling factor solely to the weights of non-output layers significantly accelerates training; based on this insight, we propose a simple, general-purpose weight-scaling initialization scheme that requires no architectural or optimizer modifications. Contribution/Results: Theoretical analysis grounded in spectral-bias modeling, together with empirical condition-number evaluation, demonstrates that our method effectively mitigates spectral bias and alleviates ill-conditioning of the optimization landscape. Across diverse datasets, it consistently achieves approximately 10× faster convergence, outperforming state-of-the-art neural field architectures in training speed. Our approach establishes a lightweight, plug-and-play initialization paradigm for efficient SNF training.
📝 Abstract
Neural fields are an emerging paradigm that represents data as continuous functions parameterized by neural networks. Despite many advantages, neural fields often have a high training cost, which prevents broader adoption. In this paper, we focus on a popular family of neural fields, called sinusoidal neural fields (SNFs), and study how they should be initialized to maximize the training speed. We find that the standard initialization scheme for SNFs -- designed based on the signal propagation principle -- is suboptimal. In particular, we show that by simply multiplying each weight (except for those of the last layer) by a constant, we can accelerate SNF training by $10\times$. This method, coined $\textit{weight scaling}$, consistently provides a significant speedup over various data domains, allowing SNFs to train faster than more recently proposed architectures. To understand why weight scaling works well, we conduct extensive theoretical and empirical analyses, which reveal that weight scaling not only resolves the spectral bias quite effectively but also enjoys a well-conditioned optimization trajectory.
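The abstract describes the method operationally: initialize an SNF as usual, then multiply every weight matrix except the output layer's by a constant. Below is a minimal sketch of that idea, assuming a SIREN-style uniform initialization; the scaling factor `alpha`, the frequency `omega_0`, and the layer sizes are illustrative choices, not values prescribed by the paper.

```python
import numpy as np

def scaled_snf_init(layer_sizes, omega_0=30.0, alpha=2.0, seed=0):
    """SIREN-style uniform initialization, followed by weight scaling:
    every non-output layer's weight matrix is multiplied by a constant
    alpha. (alpha=2.0 here is an illustrative value; the paper treats
    the scaling factor as a tunable constant.)"""
    rng = np.random.default_rng(seed)
    weights = []
    n_layers = len(layer_sizes) - 1
    for i, (fan_in, fan_out) in enumerate(zip(layer_sizes[:-1], layer_sizes[1:])):
        if i == 0:
            # First-layer bound commonly used in SIREN
            bound = 1.0 / fan_in
        else:
            # Hidden-layer bound chosen so sin(omega_0 * x) pre-activations
            # stay well-distributed under signal propagation
            bound = np.sqrt(6.0 / fan_in) / omega_0
        W = rng.uniform(-bound, bound, size=(fan_out, fan_in))
        if i < n_layers - 1:  # weight scaling: skip the output layer
            W *= alpha
        weights.append(W)
    return weights
```

Because the scaling touches only the initialization, no change to the architecture, loss, or optimizer is needed; the same training loop runs unmodified on the scaled weights.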