🤖 AI Summary
To address the trade-off between network size and reconstruction quality in implicit neural representations (INRs), this paper proposes AIRe, an adaptive training framework that jointly refines the network architecture and the input frequencies during optimization. AIRe performs dynamic structured pruning based on per-neuron contribution while simultaneously densifying the input frequency set guided by spectral analysis, thereby co-regulating representational capacity and model complexity. Its key innovation is the tight coupling of frequency-aware input modulation with parameter sparsification, enabling end-to-end adaptive architectural evolution. Evaluated on image and signed distance function (SDF) reconstruction tasks, AIRe reduces model parameters by up to 62% while matching or surpassing state-of-the-art methods in reconstruction fidelity, as measured by PSNR and Chamfer distance.
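For context, the kind of model AIRe adapts is a coordinate MLP fed with a sinusoidal encoding of the input coordinates. Below is a minimal PyTorch sketch of that baseline, assuming an octave-spaced frequency bank; the class name and defaults are illustrative, not taken from the paper. In AIRe, the frequency bank would grow (densification) and the hidden widths would shrink (pruning) over the course of training.

```python
import torch
import torch.nn as nn

class FourierFeatureMLP(nn.Module):
    """Coordinate MLP with a sinusoidal input encoding (the INR baseline)."""

    def __init__(self, in_dim=2, hidden=256, depth=4, out_dim=3, freqs=None):
        super().__init__()
        if freqs is None:
            # Octave-spaced frequency bank; AIRe would densify this set
            # during training where the signal underfits.
            freqs = 2.0 ** torch.arange(8)
        self.register_buffer("freqs", freqs)
        enc_dim = in_dim * 2 * len(freqs)  # sin and cos per (coord, freq) pair
        layers, d = [], enc_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        self.mlp = nn.Sequential(*layers)

    def encode(self, x):
        # x: (N, in_dim) coordinates, e.g. pixel locations scaled to [-1, 1]
        proj = (x[..., None] * self.freqs).flatten(-2)  # (N, in_dim * F)
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

    def forward(self, x):
        return self.mlp(self.encode(x))
```

A call like `FourierFeatureMLP()(torch.rand(1024, 2) * 2 - 1)` maps a batch of 2-D coordinates to RGB values; note that adding a frequency only widens the first linear layer, which is what makes densification a cheap architectural change.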
📝 Abstract
Encoding input coordinates with sinusoidal functions into multilayer perceptrons (MLPs) has proven effective for implicit neural representations (INRs) of low-dimensional signals, enabling the modeling of high-frequency details. However, selecting appropriate input frequencies and architectures while managing parameter redundancy remains an open challenge, often addressed through heuristics and heavy hyperparameter optimization schemes. In this paper, we introduce AIRe ($\textbf{A}$daptive $\textbf{I}$mplicit neural $\textbf{Re}$presentation), an adaptive training scheme that refines the INR architecture over the course of optimization. Our method uses a neuron pruning mechanism to avoid redundancy and input frequency densification to improve representation capacity, leading to an improved trade-off between network size and reconstruction quality. For pruning, we first identify less-contributory neurons and apply a targeted weight decay to transfer their information to the remaining neurons, followed by structured pruning. Next, the densification stage adds input frequencies in spectral regions where the model underfits the signal, expanding the representational basis. Through experiments on images and SDFs, we show that AIRe reduces model size while preserving, or even improving, reconstruction quality. Code and pretrained models will be released for public use.
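To make the two stages concrete, the sketch below shows one way the prune-then-densify cycle could be realized in PyTorch. The contribution proxy (row-wise weight norm), the form of the targeted decay penalty, and the FFT-over-residual heuristic for picking new frequencies are our assumptions for illustration; the paper's exact criteria may differ, and all helper names are hypothetical.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def neuron_scores(layer: nn.Linear) -> torch.Tensor:
    # Assumed contribution proxy: L2 norm of each neuron's incoming weights.
    return layer.weight.norm(dim=1)

def targeted_decay(layer: nn.Linear, victims: torch.Tensor, lam: float = 1e-2):
    # Extra weight decay on the least-contributory neurons, nudging their
    # information into the surviving neurons before removal.
    return lam * layer.weight[victims].pow(2).sum()

@torch.no_grad()
def structured_prune(layer: nn.Linear, next_layer: nn.Linear, keep: torch.Tensor):
    # Drop pruned neurons: their rows here, the matching columns downstream.
    layer.weight = nn.Parameter(layer.weight[keep])
    layer.bias = nn.Parameter(layer.bias[keep])
    next_layer.weight = nn.Parameter(next_layer.weight[:, keep])
    layer.out_features = next_layer.in_features = keep.numel()

@torch.no_grad()
def underfit_frequencies(residual: torch.Tensor, n_new: int = 4) -> torch.Tensor:
    # Spectral analysis of the reconstruction residual (1-D for brevity):
    # new frequencies go where residual energy concentrates.
    spectrum = torch.fft.rfft(residual).abs()
    bins = spectrum.topk(n_new).indices.float()
    return 2 * torch.pi * bins / residual.numel()
```

A training step might then add `targeted_decay(hidden, neuron_scores(hidden).argsort()[:k])` to the reconstruction loss for some epochs before calling `structured_prune`, and periodically append `underfit_frequencies(residual)` to the model's frequency bank; in PyTorch, the optimizer must be rebuilt after either change since parameters are reallocated.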