🤖 AI Summary
This work investigates how kernel transformations of an INR's input/output coordinate space affect performance, proposing a zero-parameter, low-overhead scale-and-translation pre-transformation that leaves the network architecture unchanged. It is the first study to systematically show that coordinate-space transformations govern depth-wise signal propagation and normalization dynamics in implicit neural representations (INRs), and it offers a theoretical account of how they enhance signal reconstruction fidelity. On image reconstruction tasks, the strategy consistently improves PSNR and SSIM across diverse INR architectures and datasets with negligible computational overhead, and it generalizes robustly while requiring no architectural modifications or additional trainable parameters. The study thus establishes a lightweight, model-agnostic optimization paradigm for INRs, offering a principled, efficient way to boost reconstruction quality through geometric pre-conditioning of coordinate inputs.
📝 Abstract
Implicit neural representations (INRs), which leverage neural networks to represent signals by mapping coordinates to their corresponding attributes, have garnered significant attention. They are extensively utilized for image representation, with pixel coordinates as input and pixel values as output. In contrast to prior works that investigate the effect of the model's internal components (the activation function, for instance), this work pioneers the exploration of kernel transformations applied to the input/output while keeping the model itself unchanged. A byproduct of our findings is a simple yet effective method combining scale and shift that significantly boosts INRs with negligible computational overhead. Moreover, we present two perspectives, depth and normalization, to interpret the performance benefits of the scale-and-shift transformation. Overall, our work opens a new avenue for future research to understand and improve INRs through the lens of kernel transformation.
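The scale-and-shift idea can be sketched in a few lines: apply an affine map to the input coordinates before the forward pass, leaving the network itself untouched. The sketch below is a minimal NumPy illustration under stated assumptions; the particular scale/shift values, the SIREN-style sine layer, and all function names are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def scale_shift(coords, scale=2.0, shift=-1.0):
    """Zero-parameter coordinate pre-transformation: x' = scale * x + shift.

    With scale=2, shift=-1 this maps coordinates normalized to [0, 1]
    into [-1, 1]. The specific values here are assumptions for
    illustration; the transformation adds no trainable parameters.
    """
    return scale * coords + shift

def siren_layer(x, w, b, omega=30.0):
    """One sine-activated (SIREN-style) layer: sin(omega * (x @ w + b))."""
    return np.sin(omega * (x @ w + b))

# Build a pixel-coordinate grid for a 4x4 image, normalized to [0, 1].
h = w_img = 4
ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w_img),
                     indexing="ij")
coords = np.stack([ys, xs], axis=-1).reshape(-1, 2)    # (16, 2)

# Pre-transform the coordinates, then run a tiny untrained INR forward pass.
rng = np.random.default_rng(0)
x = scale_shift(coords)                                # now in [-1, 1]
h1 = siren_layer(x, rng.normal(size=(2, 8)) / 2, np.zeros(8))
out = h1 @ rng.normal(size=(8, 1)) / 8                 # predicted pixel values
print(out.shape)                                       # (16, 1)
```

Because the transformation is a fixed affine map applied before the first layer, it composes with any coordinate-input architecture and costs one multiply-add per coordinate.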