🤖 AI Summary
This work addresses the challenge of balancing reconstruction accuracy and parameter efficiency in implicit neural representations. Inspired by subtractive synthesis, we propose a novel architecture that employs learnable periodic activation layers to generate multi-frequency bases and introduces a modulative mask module to actively excite higher-order harmonics, establishing an efficient signal modeling pipeline. To our knowledge, this is the first approach to integrate the principles of subtractive synthesis into implicit neural representations, jointly improving representational capacity and parameter efficiency. Experimental results demonstrate that our method achieves over 40 dB PSNR in image reconstruction and consistently outperforms existing approaches in novel-view synthesis on 3D NeRF benchmarks, while maintaining a compact model size.
📝 Abstract
We propose the Subtractive Modulative Network (SMN), a novel, parameter-efficient Implicit Neural Representation (INR) architecture inspired by classical subtractive synthesis. The SMN is designed as a principled signal processing pipeline, featuring a learnable periodic activation layer (Oscillator) that generates a multi-frequency basis, and a series of modulative mask modules (Filters) that actively generate high-order harmonics. We provide both theoretical analysis and empirical validation for our design. SMN achieves a PSNR of over $40$ dB on two image datasets, comparing favorably against state-of-the-art methods in terms of both reconstruction accuracy and parameter efficiency. Furthermore, a consistent advantage is observed on the challenging 3D NeRF novel-view synthesis task. Supplementary materials are available at https://inrainbws.github.io/smn/.
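To make the oscillator/filter analogy concrete, here is a minimal NumPy sketch of the two-stage pipeline the abstract describes: a periodic activation layer that maps input coordinates to a multi-frequency sinusoidal basis, followed by a learned multiplicative mask that reshapes that basis (products of sinusoids introduce sum- and difference-frequency components, i.e. higher harmonics). All names (`oscillator`, `modulative_mask`), shapes, and the sigmoid gating are illustrative assumptions, not the paper's actual implementation, which should be consulted for the exact SMN layer definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

def oscillator(x, freqs, phases):
    """Periodic activation: map 1-D coordinates x to a multi-frequency
    sinusoidal basis, one output channel per frequency."""
    return np.sin(np.outer(x, freqs) + phases)

def modulative_mask(basis, mask_weights):
    """Multiplicative mask over the basis. Gating sinusoids with a
    function of the same basis yields harmonic-rich features
    (a sketch of the 'Filter' stage, not the paper's exact module)."""
    gate = 1.0 / (1.0 + np.exp(-basis @ mask_weights))  # sigmoid in (0, 1)
    return basis * gate

x = np.linspace(-1.0, 1.0, 256)                  # input coordinates
freqs = np.array([1.0, 2.0, 4.0, 8.0]) * np.pi   # fixed here; learnable in SMN
phases = rng.normal(size=freqs.size)             # learnable in SMN
basis = oscillator(x, freqs, phases)             # (256, 4) multi-frequency basis
mask_w = 0.5 * rng.normal(size=(4, 4))           # learnable mask weights
features = modulative_mask(basis, mask_w)        # (256, 4) masked features
out = features @ rng.normal(size=4)              # linear head -> scalar signal
```

In a trained model, `freqs`, `phases`, and `mask_w` would be optimized by gradient descent against the target signal; the sketch only shows the forward signal path.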