🤖 AI Summary
Existing implicit neural representation (INR) compression methods rely on learnable encoders/decoders, quantization, and entropy coding, which limits their generalizability and complicates deployment. Method: This paper proposes a parameter-free, plug-and-play sparse-coding framework that operates directly in INR weight space. We observe that INR weights intrinsically reside in a fixed, high-dimensional subspace spanned by a pre-defined dictionary that requires neither training nor transmission. Our approach performs only sparse coding of the weights and stores the resulting coefficients, eliminating quantization, entropy coding, and learnable components entirely. Contribution/Results: Evaluated across diverse INR tasks, including image reconstruction, occupancy fields, and neural radiance fields (NeRF), our method achieves substantial storage reduction (1.8–3.2× higher compression ratios on average) while consistently outperforming state-of-the-art baselines in reconstruction fidelity. It offers superior efficiency, task-agnostic applicability, and practical deployability.
📝 Abstract
Implicit Neural Representations (INRs) are increasingly recognized as a versatile data modality for representing discretized signals, offering benefits such as infinite query resolution and reduced storage requirements. Existing signal compression approaches for INRs typically employ one of two strategies: (1) direct quantization with entropy coding of the trained INR, or (2) deriving a latent code on top of the INR through a learnable transformation. In either case, performance depends heavily on the quantization and entropy coding schemes employed. In this paper, we introduce SINR, a compression algorithm that leverages the patterns in the vector spaces formed by the weights of INRs. We compress these vector spaces using a high-dimensional sparse code within a dictionary. Further analysis reveals that the atoms of the dictionary used to generate the sparse code need not be learned or transmitted to successfully recover the INR weights. We demonstrate that the proposed approach can be integrated with any existing INR-based signal compression technique. Our results indicate that SINR achieves substantial reductions in storage requirements for INRs across various configurations, outperforming conventional INR-based compression baselines. Furthermore, SINR maintains high-quality decoding across diverse data modalities, including images, occupancy fields, and Neural Radiance Fields.
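The core idea of the abstract, sparse-coding INR weights in a dictionary whose atoms need not be learned or transmitted, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the paper does not specify this dictionary construction or solver, so we use a seeded random Gaussian dictionary (reproducible on the decoder side from the seed alone) and a simple Orthogonal Matching Pursuit loop, with toy dimensions.

```python
# Illustrative sketch only: seeded random dictionary + OMP are assumptions,
# not the paper's exact construction. Only the sparse coefficients
# (indices + values) would need to be stored; the dictionary is re-created
# from the shared seed at decode time.
import numpy as np

def make_dictionary(dim, n_atoms, seed=0):
    """Seeded random Gaussian dictionary with unit-norm atoms."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((dim, n_atoms))
    return D / np.linalg.norm(D, axis=0)

def omp(D, x, k):
    """Orthogonal Matching Pursuit: greedily select k atoms for x."""
    residual, support = x.copy(), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        atoms = D[:, support]
        # re-fit coefficients on the selected atoms (least squares)
        coef, *_ = np.linalg.lstsq(atoms, x, rcond=None)
        residual = x - atoms @ coef
    return np.array(support), coef

# Treat a flattened weight vector of one INR layer as the signal.
rng = np.random.default_rng(1)
w = rng.standard_normal(64)           # stand-in for trained INR weights
D = make_dictionary(64, 256, seed=0)  # decoder rebuilds this from the seed
idx, vals = omp(D, w, k=16)           # transmit only (idx, vals)
w_hat = D[:, idx] @ vals              # decoding: one matrix-vector product
print(np.linalg.norm(w - w_hat) / np.linalg.norm(w))  # relative error
```

The payload here is 16 index/value pairs instead of 64 raw weights; because the residual after each least-squares refit is orthogonal to the selected atoms, OMP never re-selects an atom, and reconstruction quality is tuned via the sparsity level `k`.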