FM-SIREN & FM-FINER: Nyquist-Informed Frequency Multiplier for Implicit Neural Representation with Periodic Activation

📅 2025-09-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Periodic activation-based implicit neural representations (e.g., SIREN, FINER) suffer from intra-layer neuronal frequency overlap, causing feature redundancy and limiting MLP expressivity. To address this, we propose a Nyquist criterion-driven neuron-level frequency multiplier mechanism: it adaptively assigns frequency scaling factors to individual neurons based on the input signal’s bandwidth—eliminating the need for manual hyperparameter tuning. Inspired by the discrete sine transform, our approach enhances spectral diversity while preserving network depth and computational efficiency. Experiments demonstrate that our method reduces feature redundancy by nearly 50%, achieving consistent and significant improvements over baselines in 1D audio reconstruction, 2D image fitting, 3D shape representation, and NeRF tasks—yielding higher reconstruction accuracy and superior representational efficiency.

📝 Abstract
Existing periodic activation-based implicit neural representation (INR) networks, such as SIREN and FINER, suffer from hidden feature redundancy: neurons within a layer capture overlapping frequency components due to the use of a fixed frequency multiplier. This redundancy limits the expressive capacity of multilayer perceptrons (MLPs). Drawing inspiration from classical signal processing methods such as the Discrete Sine Transform (DST), we propose FM-SIREN and FM-FINER, which assign Nyquist-informed, neuron-specific frequency multipliers to periodic activations. Unlike existing approaches, our design introduces frequency diversity without requiring hyperparameter tuning or additional network depth. This simple yet principled modification reduces feature redundancy by nearly 50% and consistently improves signal reconstruction across diverse INR tasks, including fitting 1D audio, 2D images, and 3D shapes, as well as synthesizing neural radiance fields (NeRF), outperforming the baseline counterparts while maintaining efficiency.
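The core idea in the abstract can be illustrated with a minimal NumPy sketch. Note the exact multiplier schedule is not given here, so the DST-inspired, linearly spaced schedule capped at the Nyquist rate (`nyquist_multipliers`) and the assumption of inputs normalized to [-1, 1] are illustrative assumptions, not the paper's precise formulation. The contrast with SIREN is that each neuron i applies sin(ω_i·z_i) with its own ω_i, rather than a shared ω₀.

```python
import numpy as np

rng = np.random.default_rng(0)

def nyquist_multipliers(n_neurons, n_samples):
    """Assign a distinct frequency multiplier to each neuron.

    Assumption (illustrative, not the paper's exact formula): a
    DST-like linearly spaced schedule whose largest multiplier equals
    the Nyquist angular frequency pi * n_samples for coordinates
    normalized to [-1, 1], so no neuron exceeds the representable band.
    """
    nyquist = np.pi * n_samples
    return nyquist * np.arange(1, n_neurons + 1) / n_neurons

def fm_siren_layer(x, W, b, omegas):
    """One hidden layer of a hypothetical FM-SIREN.

    SIREN computes sin(omega_0 * (W x + b)) with one shared omega_0;
    here each neuron i gets its own omega_i, promoting spectral
    diversity across the layer.
    """
    z = W @ x + b[:, None]          # pre-activations, shape (n_neurons, n_points)
    return np.sin(omegas[:, None] * z)

# Usage: 8 neurons fitting a 1D signal sampled at 64 points in [-1, 1].
x = np.linspace(-1.0, 1.0, 64)[None, :]
W = rng.normal(size=(8, 1))
b = rng.normal(size=(8,))
omegas = nyquist_multipliers(n_neurons=8, n_samples=64)
features = fm_siren_layer(x, W, b, omegas)
```

Because every neuron oscillates at a different rate, pairwise correlations between hidden features drop relative to a shared-multiplier layer, which is the redundancy reduction the abstract quantifies at roughly 50%.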
Problem

Research questions and friction points this paper is trying to address.

Hidden feature redundancy: neurons within a layer capture overlapping frequency components under a fixed frequency multiplier
Limited expressive capacity of multilayer perceptrons (MLPs) caused by this redundancy
Constrained signal reconstruction quality across diverse implicit neural representation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Assigns Nyquist-informed, neuron-specific frequency multipliers to periodic activations, without hyperparameter tuning or added depth
Reduces hidden feature redundancy by nearly 50%
Consistently improves signal reconstruction across diverse INR tasks while maintaining efficiency