SRNeRV: A Scale-wise Recursive Framework for Neural Video Representation

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the parameter redundancy and inefficiency in existing implicit neural representation (INR)-based multi-scale video generation methods, which typically model each scale independently. To overcome this limitation, we propose a scale-recurrent sharing architecture that decouples spatial and channel mixing modules. By recursively reusing the channel mixing module across scales while preserving scale-specific spatial modeling capabilities, our approach enables effective cross-scale parameter sharing. This design significantly reduces model parameters and improves both rate-distortion performance and compression efficiency. Experimental results demonstrate that, across multiple INR-friendly scenarios, the proposed method consistently outperforms current state-of-the-art techniques in terms of reconstruction quality and computational efficiency.

📝 Abstract
Implicit Neural Representations (INRs) have emerged as a promising paradigm for video representation and compression. However, existing multi-scale INR generators often suffer from significant parameter redundancy by stacking independent processing blocks for each scale. Inspired by the principle of scale self-similarity in the generation process, we propose SRNeRV, a novel scale-wise recursive framework that replaces this stacked design with a parameter-efficient shared architecture. The core of our approach is a hybrid sharing scheme derived from decoupling the processing block into a scale-specific spatial mixing module and a scale-invariant channel mixing module. We recursively apply the same shared channel mixing module, which contains the majority of the parameters, across all scales, significantly reducing the model size while preserving the crucial capacity to learn scale-specific spatial patterns. Extensive experiments demonstrate that SRNeRV achieves a significant rate-distortion performance boost, especially in INR-friendly scenarios, validating that our sharing scheme successfully amplifies the core strengths of the INR paradigm.
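To make the sharing scheme concrete, here is a minimal PyTorch sketch of the idea described in the abstract: the processing block is decoupled into a scale-specific spatial mixing module and a single scale-invariant channel mixing module that is recursively reused at every scale. The module names, layer choices (depthwise convolution for spatial mixing, a 1x1-convolution MLP for channel mixing), and the nearest-neighbor upsampling between scales are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class ScaleRecursiveBlockSketch(nn.Module):
    """Hypothetical sketch of scale-wise recursive sharing (not the
    authors' code): scale-specific spatial mixers wrap one shared
    channel mixer that is reused across all scales."""

    def __init__(self, channels: int, num_scales: int = 3, expansion: int = 2):
        super().__init__()
        # Scale-specific spatial mixing: one lightweight depthwise conv
        # per scale, so each scale keeps its own spatial pattern capacity.
        self.spatial = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                      groups=channels)
            for _ in range(num_scales)
        )
        # Scale-invariant channel mixing: a single 1x1 MLP shared across
        # scales; this is where the majority of the parameters live, and
        # it is stored only once regardless of num_scales.
        self.channel = nn.Sequential(
            nn.Conv2d(channels, channels * expansion, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(channels * expansion, channels, kernel_size=1),
        )
        self.upsample = nn.Upsample(scale_factor=2, mode="nearest")

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for spatial in self.spatial:
            x = x + spatial(x)       # scale-specific spatial mixing
            x = x + self.channel(x)  # recursively reused channel mixing
            x = self.upsample(x)     # advance to the next (finer) scale
        return x
```

Because `self.channel` is instantiated once, its parameter count is independent of the number of scales; only the small per-scale depthwise convolutions grow with `num_scales`, which is the source of the parameter savings over stacking independent blocks per scale.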
Problem

Research questions and friction points this paper is trying to address.

Implicit Neural Representations
video representation
parameter redundancy
multi-scale
neural video compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Implicit Neural Representations
Scale-wise Recursion
Parameter Sharing
Multi-scale Video Compression
Neural Video Representation