Efficient Neural Video Representation with Temporally Coherent Modulation

📅 2025-05-01
🏛️ European Conference on Computer Vision
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address a key limitation of grid-based encoding in implicit neural video representations (INRs), namely its neglect of video dynamics, which causes parameter redundancy and low coding efficiency, this paper proposes a temporally coherent modulation framework. It decomposes the 3D video volume into flow-guided groups of 2D grids and introduces a parameter-sharing modulation network, enabling dynamic feature modeling and efficient parameter reuse. The method achieves high-speed encoding (over 3× faster than NeRV-style methods), improves parameter efficiency (better quality with 10% fewer parameters), and enhances reconstruction quality (PSNR gains of 1.54 dB on UVG and 1.84 dB on MCL-JCV; LPIPS improvements of 0.019 and 0.013, respectively). Its compression performance is competitive with H.264 and HEVC.

📝 Abstract
Implicit neural representations (INRs) have found successful applications across diverse domains. To employ INRs in real-world settings, it is important to speed up training. For video, the state-of-the-art approach employs grid-type parametric encoding and achieves faster encoding than its predecessors. However, grid usage that does not account for the video's dynamic nature leads to redundant use of trainable parameters; as a result, it has significantly lower parameter efficiency and a higher bitrate than NeRV-style methods, which use no parametric encoding. To address this problem, we propose Neural Video representation with Temporally coherent Modulation (NVTM), a novel framework that captures the dynamic characteristics of video. By decomposing the spatio-temporal 3D video data into a set of 2D grids with flow information, NVTM learns video representations rapidly and uses parameters efficiently. Our framework processes temporally corresponding pixels at once, yielding the fastest encoding speed at a reasonable video quality; in particular, it is over 3× faster than the NeRV-style method. It also achieves average improvements of 1.54 dB PSNR / 0.019 LPIPS on UVG (Dynamic), even with 10% fewer parameters, and 1.84 dB PSNR / 0.013 LPIPS on MCL-JCV (Dynamic), compared to previous grid-type works. Extending the framework to compression, we demonstrate performance comparable to video compression standards (H.264, HEVC) and to recent INR approaches for video compression. Additionally, extensive experiments demonstrate the strong performance of our algorithm across diverse tasks, including super resolution, frame interpolation, and video inpainting. Project page: https://sujiikim.github.io/NVTM/.
Problem

Research questions and friction points this paper is trying to address.

Improving parameter efficiency in neural video representation
Enhancing encoding speed for dynamic video characteristics
Achieving better video quality with fewer parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Temporally coherent modulation for dynamic video
Decomposing 3D video into 2D grids with flow
Fast encoding with efficient parameter usage
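The decomposition idea above (temporally corresponding pixels are back-warped by flow onto a shared 2D grid, so one set of grid parameters serves many frames) can be sketched in a few lines. This is a minimal NumPy illustration of the principle only, not the authors' implementation; the normalized coordinate convention, the `flow_to_key` back-warping field, and the grid shape are all assumptions made for the example.

```python
import numpy as np

def bilinear_sample(grid, coords):
    """Bilinearly sample a learnable 2D feature grid (H, W, C)
    at continuous coordinates (N, 2), each in [0, 1]."""
    H, W, _ = grid.shape
    x = coords[:, 0] * (W - 1)
    y = coords[:, 1] * (H - 1)
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    wx = (x - x0)[:, None]
    wy = (y - y0)[:, None]
    top = grid[y0, x0] * (1 - wx) + grid[y0, x0 + 1] * wx
    bot = grid[y0 + 1, x0] * (1 - wx) + grid[y0 + 1, x0 + 1] * wx
    return top * (1 - wy) + bot * wy

def temporally_coherent_features(grid, coords_t, flow_to_key):
    """Back-warp frame-t pixel coordinates to a shared keyframe grid
    via flow, then sample. Temporally corresponding pixels land on the
    same grid location, so the same parameters are reused across time."""
    aligned = np.clip(coords_t + flow_to_key, 0.0, 1.0)
    return bilinear_sample(grid, aligned)
```

With a correct flow field, a pixel and its moved counterpart in a later frame sample identical features; a per-frame modulation network (omitted here) would then account for appearance changes the shared grid cannot express.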
Seungjun Shin
Samsung Advanced Institute of Technology
Suji Kim
Kookmin University
Dokwan Oh
Samsung Advanced Institute of Technology