Exploring Kernel Transformations for Implicit Neural Representations

📅 2025-04-07
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work investigates the impact of input/output coordinate-space kernel transformations on the performance of implicit neural representations (INRs), proposing a zero-parameter, low-overhead scale-and-shift pre-transformation that leaves the network architecture unchanged. It systematically examines, for the first time, the role of coordinate-space transformations in governing depth-wise signal propagation and normalization dynamics in INRs, and offers a theoretical account of why they improve reconstruction fidelity. On image reconstruction tasks, the strategy consistently improves PSNR and SSIM across diverse INR architectures and datasets, with negligible computational overhead and no additional trainable parameters. The study thereby establishes a lightweight, model-agnostic optimization paradigm for INRs: boosting reconstruction quality through geometric pre-conditioning of the coordinate inputs.
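As a concrete, hypothetical reading of the summary above, the sketch below wraps an otherwise unchanged SIREN-style coordinate MLP with a fixed, zero-parameter scale-and-shift applied to the input coordinates. The scale/shift constants, layer sizes, and the ScaleShift/Sine/make_inr helpers are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch (not the authors' code): a fixed, zero-parameter
# scale-and-shift applied to the coordinates before a standard
# coordinate MLP; the network itself is left unchanged.
import torch
import torch.nn as nn

class ScaleShift(nn.Module):
    """Zero-parameter coordinate pre-transformation: x -> scale * x + shift."""
    def __init__(self, scale=2.0, shift=-1.0):
        super().__init__()
        self.scale, self.shift = scale, shift  # fixed constants, not trainable

    def forward(self, coords):
        return self.scale * coords + self.shift

class Sine(nn.Module):
    """Sinusoidal activation, as used by SIREN-style INRs."""
    def __init__(self, w0=30.0):
        super().__init__()
        self.w0 = w0

    def forward(self, x):
        return torch.sin(self.w0 * x)

def make_inr(in_dim=2, hidden=256, out_dim=3, depth=4):
    """A small coordinate MLP; sizes here are illustrative assumptions."""
    layers = [nn.Linear(in_dim, hidden), Sine()]
    for _ in range(depth - 1):
        layers += [nn.Linear(hidden, hidden), Sine()]
    layers += [nn.Linear(hidden, out_dim)]
    return nn.Sequential(*layers)

# Wrap the unchanged INR with the coordinate pre-transformation.
model = nn.Sequential(ScaleShift(scale=2.0, shift=-1.0), make_inr())

# Example: pixel coordinates of a 64x64 image, normalized to [0, 1],
# are mapped to [-1, 1] by the pre-transform before entering the MLP.
ys, xs = torch.meshgrid(torch.linspace(0, 1, 64), torch.linspace(0, 1, 64), indexing="ij")
coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2)
rgb = model(coords)  # (64*64, 3) predicted pixel values
```

Because the transform has no trainable parameters and costs one multiply-add per coordinate, the overhead it adds to training or inference is negligible.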

📝 Abstract
Implicit neural representations (INRs), which leverage neural networks to represent signals by mapping coordinates to their corresponding attributes, have garnered significant attention. They are extensively utilized for image representation, with pixel coordinates as input and pixel values as output. In contrast to prior works that investigate the effect of a model's internal components (the activation function, for instance), this work pioneers the exploration of kernel transformations applied to the input/output while keeping the model itself unchanged. A byproduct of our findings is a simple yet effective method that combines scale and shift to significantly boost INRs with negligible computational overhead. Moreover, we present two perspectives, depth and normalization, to interpret the performance benefits brought by the scale and shift transformation. Overall, our work provides a new avenue for future works to understand and improve INRs through the lens of kernel transformation.
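To make the abstract's "normalization" perspective more tangible, here is a small, self-contained probe of our own devising (not the paper's analysis): it compares first-layer pre-activation statistics when the same coordinates are fed raw in [0, 1] versus after an illustrative scale-and-shift to [-1, 1]. The grid size, layer width, and chosen constants are assumptions.

```python
# Hedged illustration of the "normalization" reading: rescaling the input
# coordinates changes the statistics of the first layer's pre-activations,
# much like an input-normalization step would. This probe is our own
# construction; the paper's formal analysis may differ.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Pixel coordinates of a 128x128 grid, initially in [0, 1].
ys, xs = torch.meshgrid(torch.linspace(0, 1, 128), torch.linspace(0, 1, 128), indexing="ij")
coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2)

first_layer = nn.Linear(2, 256)

def preact_stats(x):
    z = first_layer(x)  # first-layer pre-activations
    return z.mean().item(), z.std().item()

# Identity vs. an illustrative scale-and-shift (maps [0, 1] to [-1, 1]).
for name, (scale, shift) in {"raw [0,1]": (1.0, 0.0), "scaled [-1,1]": (2.0, -1.0)}.items():
    mean, std = preact_stats(scale * coords + shift)
    print(f"{name:>14}: pre-activation mean={mean:+.3f}, std={std:.3f}")
```

Which statistics the paper actually tracks, and how it formalizes the depth perspective, should be taken from the paper itself; this snippet only shows how the input mapping changes what the first layer sees.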
Problem

Research questions and friction points this paper is trying to address.

How do kernel transformations of the input/output coordinates affect INR performance when the model itself is left unchanged?
Can a simple scale-and-shift pre-transformation boost INR performance with negligible overhead and no added parameters?
Why does the transformation help, viewed through depth-wise signal propagation and normalization?
Innovation

Methods, ideas, or system contributions that make the work stand out.

First systematic exploration of input/output kernel transformations for INRs, with the architecture kept unchanged
A zero-parameter scale-and-shift method that efficiently boosts INR reconstruction quality
Depth and normalization perspectives that explain the benefits of the transformation
Sheng Zheng
Beijing Institute of Technology
Computer vision
Chaoning Zhang
Professor at UESTC (University of Electronic Science and Technology of China)
Computer Vision, LLM and VLM, GenAI and AIGC Detection
Dongshen Han
School of Computing, Kyung Hee University, Yongin-si, Korea
Fachrina Dewi Puspitasari
School of Computing, Kyung Hee University, Yongin-si, Korea
Xinhong Hao
School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
Yang Yang
Center for Future Media and the School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China, and also with the Institute of Electronic and Information Engineering, University of Electronic Science and Technology of China, Guangdong, China
Heng Tao Shen
Center for Future Media and the School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China, and also with the Peng Cheng Laboratory, Shenzhen, China