Prediction and Reference Quality Adaptation for Learned Video Compression

📅 2024-06-20
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address error propagation caused by dynamic mismatch between temporal prediction quality and reference frame quality in learned video compression, this paper proposes a dual adaptive mechanism: Prediction Quality Adaptation (PQA) and Reference Quality Adaptation (RQA). PQA introduces a confidence-driven spatial-channel joint prediction selection scheme for fine-grained suppression or enhancement of predictions. RQA employs spatially variant dynamic filtering, adaptively modulating convolutional kernels based on local quality variations in reference frames. The method integrates deep confidence modeling, spatial-channel attention, and a recurrent long-cycle training strategy. Evaluated on standard benchmarks including UVG and MCL-JCV, the approach achieves an average BD-rate reduction of 8.2%, outperforming state-of-the-art learned video codecs. It is the first to realize quality-aware spatiotemporal collaborative error suppression.
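The "spatially variant dynamic filtering" described for RQA can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the 3×3 kernel size, and the edge padding are my own assumptions. The idea is that every pixel of a reference frame is filtered with its own kernel, which the actual codec would predict from local quality variations rather than supply by hand.

```python
import numpy as np

def spatially_variant_filter(ref, kernels):
    """Apply a different 3x3 kernel at every pixel of a reference frame.

    ref: (H, W) reference frame.
    kernels: (H, W, 3, 3) per-pixel kernels; in the paper's setting these
             would be predicted from local quality cues, not hand-crafted.
    """
    H, W = ref.shape
    padded = np.pad(ref, 1, mode="edge")  # replicate borders for 3x3 support
    out = np.empty_like(ref)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + 3, x:x + 3]
            out[y, x] = np.sum(patch * kernels[y, x])
    return out

# Sanity check: per-pixel delta kernels reproduce the reference exactly.
ref = np.arange(16.0).reshape(4, 4)
delta = np.zeros((4, 4, 3, 3))
delta[:, :, 1, 1] = 1.0
identity_out = spatially_variant_filter(ref, delta)
```

A learned variant would replace the explicit double loop with an unfold-and-multiply operation for efficiency, but the per-pixel kernel idea is the same.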

📝 Abstract
Temporal prediction is one of the most important technologies for video compression. Traditional video codecs design various prediction coding modes and adaptively decide the optimal mode according to prediction quality and reference quality. Recently, learned video codecs have made great progress. However, they have not effectively addressed the problem of prediction and reference quality adaptation, which limits the effective utilization of temporal prediction and the reduction of reconstruction error propagation. Therefore, in this paper, we first propose a confidence-based prediction quality adaptation (PQA) module that explicitly discriminates spatial and channel-wise differences in prediction quality. With this module, low-quality predictions are suppressed and high-quality predictions are enhanced, so the codec can adaptively decide which spatial or channel locations of the predictions to use. We then propose a reference quality adaptation (RQA) module and an associated repeat-long training strategy that provide dynamic spatially variant filters for diverse reference qualities. With these filters, our codec can adapt to different reference qualities, making it easier to achieve the target reconstruction quality and to reduce reconstruction error propagation. Experimental results verify that the proposed modules help our codec achieve higher compression performance.
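The PQA idea of suppressing low-quality predictions and passing high-quality ones can be sketched as element-wise gating. This is a rough illustration, not the paper's architecture: the function names are mine, and the confidence logits stand in for what the codec would learn end-to-end.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def prediction_quality_adaptation(prediction, confidence_logits):
    """Gate a temporal prediction by per-element confidence.

    prediction: (C, H, W) feature predicted from reference frames.
    confidence_logits: (C, H, W) learned logits estimating, per spatial
        and channel location, how trustworthy the prediction is.
    Low-confidence elements are pushed toward zero (suppressed); high-
    confidence elements pass through nearly unchanged.
    """
    mask = sigmoid(confidence_logits)  # values in (0, 1)
    return prediction * mask

# Toy example: one confident channel, one unreliable channel.
pred = np.ones((2, 2, 2))
logits = np.array([[[5.0, 5.0], [5.0, 5.0]],      # high confidence
                   [[-5.0, -5.0], [-5.0, -5.0]]])  # low confidence
gated = prediction_quality_adaptation(pred, logits)
```

In the actual codec the gating signal would come from a trained confidence network and the "enhancement" side would involve more than a multiplicative mask, but the sketch captures the spatial-channel selection the abstract describes.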
Problem


Video Compression
Quality Variation
Temporal Prediction
Innovation


Intelligent Video Compression
Prediction Quality Adaptation (PQA)
Reference Quality Adaptation (RQA)
Xihua Sheng
University of Science and Technology of China → City University of Hong Kong
Video Coding · Image Coding · Point Cloud Coding
Li Li
MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China, Hefei 230027, China
Dong Liu
MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China, Hefei 230027, China
Houqiang Li
Professor, Department of Electrical Engineering and Information Science, University of Science and Technology of China
Multimedia Search · Image/Video Analysis · Image/Video Coding