Subtractive Modulative Network with Learnable Periodic Activations

📅 2026-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of balancing reconstruction accuracy and parameter efficiency in implicit neural representations. Inspired by subtractive synthesis, we propose a novel architecture that employs learnable periodic activation layers to generate multi-frequency bases and introduces a modulation mask module to actively excite higher-order harmonics, thereby establishing an efficient signal modeling pipeline. To our knowledge, this is the first approach to integrate the principles of subtractive synthesis into implicit neural representations, achieving a synergistic optimization of representational capacity and parameter efficiency. Experimental results demonstrate that our method achieves over 40 dB PSNR in image reconstruction and significantly outperforms existing approaches in novel-view synthesis on 3D NeRF benchmarks, while maintaining a compact model size.

📝 Abstract
We propose the Subtractive Modulative Network (SMN), a novel, parameter-efficient Implicit Neural Representation (INR) architecture inspired by classical subtractive synthesis. The SMN is designed as a principled signal processing pipeline, featuring a learnable periodic activation layer (Oscillator) that generates a multi-frequency basis, and a series of modulative mask modules (Filters) that actively generate high-order harmonics. We provide both theoretical analysis and empirical validation for our design. Our SMN achieves a PSNR of $40+$ dB on two image datasets, comparing favorably against state-of-the-art methods in terms of both reconstruction accuracy and parameter efficiency. Furthermore, a consistent advantage is observed on the challenging 3D NeRF novel-view synthesis task. Supplementary materials are available at https://inrainbws.github.io/smn/.
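The abstract's oscillator-then-filter pipeline can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the function names, the fixed frequency list, and the pairwise-product scheme for exciting higher-order harmonics are all assumptions made for clarity (products of sinusoids yield sum/difference frequencies, which is one simple way a modulation stage can generate harmonics).

```python
import math

def oscillator(x, freqs, phases):
    """'Oscillator' stage (sketch): a periodic activation maps a scalar
    input to a multi-frequency sinusoidal basis. In the paper the
    frequencies and phases would be learnable; here they are fixed."""
    return [math.sin(w * x + p) for w, p in zip(freqs, phases)]

def modulative_mask(basis, mask):
    """'Filter' stage (sketch): a mask gates the basis, and pairwise
    products of the gated sinusoids create sum/difference frequencies,
    i.e. harmonics beyond the base frequencies."""
    gated = [m * b for m, b in zip(mask, basis)]
    harmonics = [gated[i] * gated[j]
                 for i in range(len(gated))
                 for j in range(i, len(gated))]
    return gated + harmonics

# Toy forward pass: 3 base frequencies, uniform (untrained) mask.
basis = oscillator(0.5, freqs=[1.0, 2.0, 4.0], phases=[0.0, 0.0, 0.0])
features = modulative_mask(basis, mask=[1.0, 1.0, 1.0])
```

With three base frequencies, the filter stage emits the three gated sinusoids plus six pairwise products, so the feature dimension grows without adding base oscillators, which is the parameter-efficiency intuition behind the design.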
Problem

Research questions and friction points this paper is trying to address.

Implicit Neural Representation
Parameter Efficiency
Signal Reconstruction
Periodic Activation
Neural Rendering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Subtractive Modulative Network
Learnable Periodic Activations
Implicit Neural Representation
Parameter Efficiency
Harmonic Generation
Tiou Wang
School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Sweden
Zhuoqian Yang
School of Computer and Communication Sciences, EPFL, Switzerland
Markus Flierl
School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Sweden
Mathieu Salzmann
EPFL
Computer vision, machine learning
Sabine Süsstrunk
Professor, Images and Visual Representation Lab, EPFL
Computational Photography, Computational Imaging, Color Imaging, Vision Science