Adaptive Training of INRs via Pruning and Densification

📅 2025-10-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the trade-off between network scale and reconstruction quality in implicit neural representations (INRs), this paper proposes AIRe, an adaptive training framework. AIRe jointly optimizes network architecture and input frequency during training: it performs dynamic structured pruning based on neuron contribution, while simultaneously densifying input frequencies guided by spectral analysis—thereby co-regulating representational capacity and model complexity. Its key innovation lies in the first deep coupling of frequency-aware input modulation with parameter sparsification, enabling end-to-end adaptive architectural evolution. Evaluated on image and signed distance function (SDF) reconstruction tasks, AIRe achieves up to 62% reduction in model parameters while maintaining or surpassing state-of-the-art methods in reconstruction fidelity, as measured by PSNR and Chamfer distance.

📝 Abstract
Encoding input coordinates with sinusoidal functions into multilayer perceptrons (MLPs) has proven effective for implicit neural representations (INRs) of low-dimensional signals, enabling the modeling of high-frequency details. However, selecting appropriate input frequencies and architectures while managing parameter redundancy remains an open challenge, often addressed through heuristics and heavy hyperparameter optimization schemes. In this paper, we introduce AIRe (**A**daptive **I**mplicit neural **Re**presentation), an adaptive training scheme that refines the INR architecture over the course of optimization. Our method uses a neuron pruning mechanism to avoid redundancy and input frequency densification to improve representation capacity, leading to an improved trade-off between network size and reconstruction quality. For pruning, we first identify less-contributory neurons and apply a targeted weight decay to transfer their information to the remaining neurons, followed by structured pruning. Next, the densification stage adds input frequencies to spectrum regions where the signal underfits, expanding the representational basis. Through experiments on images and SDFs, we show that AIRe reduces model size while preserving, or even improving, reconstruction quality. Code and pretrained models will be released for public use.
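The abstract's pruning stage can be illustrated with a minimal numpy sketch. The paper does not release code here, so the scoring rule, the decay schedule, and the function names below (`neuron_scores`, `targeted_decay_then_prune`, `keep_ratio`) are all illustrative assumptions, not the authors' implementation: neurons are scored by the L1 norm of their outgoing weights, low-scoring neurons get extra weight decay for a few steps, and then their rows and columns are removed (structured pruning).

```python
import numpy as np

def neuron_scores(W_out):
    """Score each hidden neuron by the L1 norm of its outgoing weights
    (an assumed proxy for 'neuron contribution', not the paper's metric)."""
    return np.abs(W_out).sum(axis=1)

def targeted_decay_then_prune(W_in, W_out, keep_ratio=0.5, decay=0.9, steps=10):
    """Apply extra weight decay to the least-contributory neurons, then
    remove them entirely, shrinking both weight matrices."""
    scores = neuron_scores(W_out)
    k = max(1, int(keep_ratio * len(scores)))
    keep = np.sort(np.argsort(scores)[-k:])           # top-k neurons survive
    prune = np.setdiff1d(np.arange(len(scores)), keep)
    for _ in range(steps):                            # targeted weight decay
        W_out[prune] *= decay                         # shrink doomed neurons
    return W_in[:, keep], W_out[keep]                 # structured removal

rng = np.random.default_rng(0)
W_in = rng.normal(size=(2, 8))    # input coords -> 8 hidden neurons
W_out = rng.normal(size=(8, 3))   # 8 hidden neurons -> output
W_in2, W_out2 = targeted_decay_then_prune(W_in, W_out, keep_ratio=0.5)
print(W_in2.shape, W_out2.shape)  # (2, 4) (4, 3)
```

In a real training loop the decay steps would be interleaved with gradient updates so the surviving neurons can absorb the pruned neurons' information before removal.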
Problem

Research questions and friction points this paper is trying to address.

Selecting optimal input frequencies and architectures for implicit neural representations
Managing parameter redundancy in multilayer perceptron networks
Improving trade-off between network size and reconstruction quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive training refines architecture during optimization
Neuron pruning reduces redundancy via targeted weight decay
Input frequency densification expands representational basis
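The densification idea above (add input frequencies where the signal underfits) can be sketched as follows. This is a toy reading of the abstract, not AIRe's actual procedure: the residual between target and prediction is transformed with an FFT, and the frequencies carrying the most residual energy that are not already in the encoding are added. The name `densify_frequencies` and the 0.5 spacing threshold are assumptions.

```python
import numpy as np

def densify_frequencies(residual, current_freqs, n_new=2):
    """Pick new input frequencies from the peaks of the residual spectrum,
    skipping frequencies already present in the encoding."""
    spectrum = np.abs(np.fft.rfft(residual))
    freqs = np.fft.rfftfreq(len(residual), d=1.0 / len(residual))
    new = []
    for idx in np.argsort(spectrum)[::-1]:  # strongest residual peaks first
        f = float(freqs[idx])
        if f > 0 and all(abs(f - g) > 0.5 for g in current_freqs):
            new.append(f)
        if len(new) == n_new:
            break
    return new

# Toy example: the fit misses a pure 7 Hz component, so 7.0 is proposed.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
residual = np.sin(2 * np.pi * 7 * t)
new = densify_frequencies(residual, current_freqs=[1.0, 2.0], n_new=1)
print(new)  # [7.0]
```

After densification, the new sinusoidal features would be appended to the input encoding and the first-layer weights extended accordingly, expanding the representational basis as the abstract describes.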
Authors
Diana Aldana — IMPA
João Paulo Lima — IMPA, Universidade Federal Rural de Pernambuco
Daniel Csillag — FGV EMAp (Machine Learning, Statistics)
Daniel Perazzo — IMPA
Haoan Feng — University of Maryland
Luiz Velho — IMPA (Graphics, Vision)
Tiago Novello — IMPA