Neural Video Compression with In-Loop Contextual Filtering and Out-of-Loop Reconstruction Enhancement

📅 2025-09-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural video compression suffers from error propagation across frames during inter-frame coding, which degrades reconstruction quality and limits coding efficiency. To address this, the paper systematically investigates the role of enhancement filtering within conditional coding frameworks and proposes a co-optimization paradigm combining in-loop contextual filtering with out-of-loop reconstruction enhancement. Its core contribution is an adaptive in-loop filtering decision mechanism that dynamically refines the temporal context to suppress error accumulation, jointly optimized with an out-of-loop enhancement module that improves reconstruction fidelity. Experimental results demonstrate that, at equivalent reconstruction quality, the proposed approach achieves an average bitrate reduction of 7.71% over state-of-the-art neural video codecs, improving both coding efficiency and visual quality.
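The in-loop / out-of-loop split described above can be sketched in a few lines. The following is a hedged illustration, not the paper's method: the module names and the toy scalar "codec" (quantized residual coding against a propagated context) are assumptions chosen only to show where each filter sits relative to the coding loop.

```python
from typing import List, Optional, Tuple

# Toy stand-ins so the sketch runs end to end; in the actual codec these
# would be learned networks, and all names here are illustrative.

def in_loop_contextual_filter(context: float) -> float:
    """Refine the propagated temporal context (here: simple damping)."""
    return 0.9 * context

def conditional_code(frame: float,
                     context: Optional[float]) -> Tuple[float, float]:
    """Conditionally code `frame` given `context`: quantize the residual
    and return (reconstruction, next temporal context)."""
    pred = context if context is not None else 0.0
    recon = pred + round(frame - pred)  # integer-rounded residual = lossy
    return recon, recon

def out_of_loop_enhance(recon: float) -> float:
    """Post-filter applied to the output only (identity placeholder)."""
    return recon

def encode_sequence(frames: List[float]) -> List[float]:
    decoded: List[float] = []
    context: Optional[float] = None
    for frame in frames:
        if context is not None:
            # In-loop: the filtered context feeds the codec, so it
            # influences the current frame and every later frame.
            context = in_loop_contextual_filter(context)
        recon, context = conditional_code(frame, context)
        # Out-of-loop: enhances the displayed frame only, so any error it
        # introduces can never propagate back into the coding loop.
        decoded.append(out_of_loop_enhance(recon))
    return decoded
```

The key structural point the sketch captures is that the in-loop filter modifies state that survives into future frames, whereas the out-of-loop filter touches only the final output.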

📝 Abstract
This paper explores the application of enhancement filtering techniques in neural video compression. Specifically, we categorize these techniques into in-loop contextual filtering and out-of-loop reconstruction enhancement, based on whether the enhanced representation affects the subsequent coding loop. In-loop contextual filtering refines the temporal context, mitigating error propagation during frame-by-frame encoding. However, because it influences both the current and subsequent frames, applying it adaptively throughout the sequence is challenging. To address this, we introduce an adaptive coding decision strategy that dynamically determines when to apply filtering during encoding. Additionally, out-of-loop reconstruction enhancement is employed to refine the quality of reconstructed frames, providing a simple yet effective improvement in coding efficiency. To the best of our knowledge, this work presents the first systematic study of enhancement filtering in the context of conditional coding-based neural video compression. Extensive experiments demonstrate a 7.71% reduction in bit rate compared to state-of-the-art neural video codecs, validating the effectiveness of the proposed approach.
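The adaptive coding decision can be read as a per-frame rate-distortion choice: try coding the frame both with and without the filtered context and keep the cheaper variant, signalling the 1-bit decision to the decoder. A minimal sketch under that assumption; `toy_code` and `toy_filter` are hypothetical stand-ins for the learned modules, and the Lagrangian RD cost is the standard J = D + λR, not a formula from the paper:

```python
from typing import Callable, Tuple

def rd_cost(bits: float, distortion: float, lam: float = 0.1) -> float:
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * bits

def choose_filtering(frame: float,
                     context: float,
                     code_fn: Callable[[float, float],
                                       Tuple[float, float, float]],
                     filter_fn: Callable[[float], float],
                     lam: float = 0.1) -> Tuple[bool, float]:
    """Code `frame` with and without the filtered context and keep the
    variant with the lower RD cost; return (use_filter_flag, recon).
    The flag would be signalled alongside the frame's bitstream."""
    best = None
    for use_filter in (False, True):
        ctx = filter_fn(context) if use_filter else context
        recon, bits, dist = code_fn(frame, ctx)
        cost = rd_cost(bits, dist, lam)
        if best is None or cost < best[0]:
            best = (cost, use_filter, recon)
    return best[1], best[2]

# Toy codec: perfect reconstruction, rate grows with the residual
# |frame - context|; the toy filter halves the context.
toy_code = lambda f, c: (f, abs(f - c), 0.0)  # (recon, bits, distortion)
toy_filter = lambda c: c / 2
```

With this toy model, filtering is chosen exactly when it brings the context closer to the frame, which mirrors the intent of the paper's adaptive decision: apply the filter only where it actually helps the loop.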
Problem

Research questions and friction points this paper is trying to address.

Enhancing neural video compression with filtering techniques
Mitigating error propagation in temporal frame encoding
Improving coding efficiency via adaptive reconstruction enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

In-loop contextual filtering reduces error propagation
Adaptive coding strategy dynamically applies filtering
Out-of-loop enhancement improves reconstructed frame quality
Authors
Yaojun Wu, Bytedance Inc. (Data compression, Deep learning)
Chaoyi Lin, Bytedance China
Yiming Wang, Hohai University
Semih Esenlik, Bytedance Inc.
Zhaobin Zhang, Bytedance Inc.
Kai Zhang, Bytedance Inc.
Li Zhang, Bytedance Inc.