🤖 AI Summary
Periodic activation-based implicit neural representations (e.g., SIREN, FINER) suffer from intra-layer neuronal frequency overlap, causing feature redundancy and limiting MLP expressivity. To address this, we propose a Nyquist criterion-driven, neuron-level frequency-multiplier mechanism that adaptively assigns frequency scaling factors to individual neurons based on the input signal's bandwidth, eliminating the need for manual hyperparameter tuning. Inspired by the discrete sine transform, our approach enhances spectral diversity while preserving network depth and computational efficiency. Experiments demonstrate that our method reduces feature redundancy by nearly 50% and achieves consistent, significant improvements over baselines in 1D audio reconstruction, 2D image fitting, 3D shape representation, and NeRF tasks, yielding higher reconstruction accuracy and superior representational efficiency.
📝 Abstract
Existing periodic activation-based implicit neural representation (INR) networks, such as SIREN and FINER, suffer from hidden-feature redundancy: neurons within a layer capture overlapping frequency components due to the use of a fixed frequency multiplier. This redundancy limits the expressive capacity of multilayer perceptrons (MLPs). Drawing inspiration from classical signal processing methods such as the Discrete Sine Transform (DST), we propose FM-SIREN and FM-FINER, which assign Nyquist-informed, neuron-specific frequency multipliers to periodic activations. Unlike existing approaches, our design introduces frequency diversity without requiring hyperparameter tuning or additional network depth. This simple yet principled modification reduces feature redundancy by nearly 50% and consistently improves signal reconstruction across diverse INR tasks, including fitting 1D audio, 2D images, and 3D shapes, as well as synthesizing neural radiance fields (NeRF), outperforming the baseline counterparts while maintaining efficiency.
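To make the idea concrete, here is a minimal numpy sketch of a SIREN-style sine layer in which each neuron receives its own DST-inspired, Nyquist-informed frequency multiplier instead of a shared fixed one. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the names `dst_multipliers` and `FMSineLayer`, the linear spacing of the multipliers, and the SIREN-style initialization scaling are all assumptions made for the example.

```python
import numpy as np

def dst_multipliers(num_neurons, bandwidth_hz, sample_rate_hz):
    """DST-inspired, Nyquist-informed per-neuron frequency multipliers.

    Assumption for this sketch: the Nyquist criterion caps useful frequencies
    at sample_rate / 2, so multipliers are spread linearly (like DST bins,
    sin(pi * k * n / N)) up to the normalized signal bandwidth, giving each
    neuron a distinct frequency scale.
    """
    nyquist = sample_rate_hz / 2.0
    max_omega = np.pi * min(bandwidth_hz, nyquist) / nyquist  # normalized cap
    k = np.arange(1, num_neurons + 1)
    return max_omega * k / num_neurons  # one distinct omega_k per neuron

class FMSineLayer:
    """Sine layer with a per-neuron frequency multiplier (hypothetical sketch)."""

    def __init__(self, in_dim, out_dim, omegas, rng=None):
        rng = rng or np.random.default_rng(0)
        # SIREN-style uniform init, scaled by each neuron's own multiplier
        # so pre-activations stay in a comparable range across neurons.
        bound = np.sqrt(6.0 / in_dim) / np.maximum(omegas, 1e-6)
        self.W = rng.uniform(-1.0, 1.0, (out_dim, in_dim)) * bound[:, None]
        self.b = np.zeros(out_dim)
        self.omegas = omegas

    def __call__(self, x):
        # sin(omega_k * (w_k . x + b_k)): row k oscillates at its own rate,
        # unlike vanilla SIREN where every neuron shares omega_0 (typically 30).
        return np.sin(self.omegas * (x @ self.W.T + self.b))
```

A usage example for a 1D audio-like input: `FMSineLayer(1, 64, dst_multipliers(64, 4000.0, 16000.0))` maps coordinates in `[-1, 1]` to 64 features, each produced at a distinct frequency scale, which is the mechanism the abstract credits with reducing intra-layer frequency overlap.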