AI Summary
Existing blind quality enhancement methods for compressed video (QECV) rely on global degradation representations, which lack spatial detail and cannot adapt computational cost to varying compression levels. This paper proposes a spatially adaptive and computationally scalable blind QECV framework. First, a pretrained multiscale degradation representation module enables fine-grained, spatially aware artifact modeling. Second, a hierarchical dynamic termination mechanism adaptively adjusts the number of enhancement stages according to compression severity. To our knowledge, this is the first blind QECV method to achieve pixel-level degradation-aware enhancement while balancing accuracy and efficiency. Experiments show that at QP = 22 the proposed method gains 0.34 dB PSNR over a state-of-the-art blind approach (a relative improvement of 110%, from 0.31 dB to 0.65 dB), and the termination mechanism halves average inference time at QP = 22 relative to QP = 42.
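The spatial limitation described above can be illustrated with a minimal sketch: global channel attention applies one weight per channel uniformly over all positions, whereas pixel-level guidance modulates each position independently. The function and variable names below are illustrative, not the paper's actual modules; plain nested lists stand in for feature tensors.

```python
# Hypothetical contrast between global channel attention (prior blind
# QECV guidance) and spatially adaptive, pixel-level modulation.

def channel_attention(features, channel_weights):
    """Global guidance: one scalar per channel, applied uniformly at
    every spatial position, so all pixels in a channel scale alike."""
    return [[[v * channel_weights[c] for v in row]
             for row in features[c]]
            for c in range(len(features))]

def spatial_modulation(features, degradation_map):
    """Pixel-level guidance: a per-position weight map lets the same
    channel be scaled differently at different spatial locations."""
    return [[[features[c][y][x] * degradation_map[y][x]
              for x in range(len(features[c][y]))]
             for y in range(len(features[c]))]
            for c in range(len(features))]

# Toy usage: one channel, a 2x2 feature map.
feats = [[[1.0, 2.0], [3.0, 4.0]]]
print(channel_attention(feats, [0.5]))                       # every pixel halved
print(spatial_modulation(feats, [[1.0, 0.5], [0.5, 1.0]]))   # per-pixel scaling
```

The second function captures the motivation for a spatially aware degradation representation: artifacts that differ across positions receive position-specific guidance.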
Abstract
Existing studies on Quality Enhancement for Compressed Video (QECV) predominantly rely on known Quantization Parameters (QPs), employing a distinct enhancement model per QP setting; such approaches are termed non-blind methods. However, in real-world scenarios involving transcoding or transmission, QPs may be partially or entirely unknown, which limits the applicability of such approaches and motivates the development of blind QECV techniques. Current blind methods generate degradation vectors via classification models trained with cross-entropy loss and use them as channel attention to guide artifact removal. However, these vectors capture only global degradation information and lack spatial detail, hindering adaptation to artifact patterns that vary across spatial positions. To address these limitations, we propose a pretrained Degradation Representation Learning (DRL) module that decouples and extracts high-dimensional, multiscale degradation representations from the video content to guide artifact removal. Additionally, both blind and non-blind methods typically employ a uniform architecture across QPs, thereby overlooking the varying computational demands inherent to different compression levels. We thus introduce a hierarchical termination mechanism that dynamically adjusts the number of artifact reduction stages according to the compression level. Experimental results demonstrate that the proposed approach significantly enhances performance, achieving a 110% improvement in PSNR gain (from 0.31 dB to 0.65 dB) over a competing state-of-the-art blind method at QP = 22. Furthermore, the proposed hierarchical termination mechanism halves the average inference time at QP = 22 compared to QP = 42.
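The termination mechanism described in the abstract can be sketched as an early-exit loop: stages run sequentially until an estimated residual degradation falls below a threshold, so lightly compressed frames (low QP) exit after few stages while heavily compressed ones run deeper. The names, the scalar "frame", and the severity estimator below are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of hierarchical dynamic termination: run
# enhancement stages until estimated degradation is low enough.

def enhance_with_termination(frame, stages, estimate_severity, threshold=0.1):
    """Apply enhancement stages in sequence, stopping early once the
    estimated residual degradation drops below `threshold`.
    Returns the enhanced frame and the number of stages executed."""
    depth = 0
    for stage in stages:
        frame = stage(frame)
        depth += 1
        if estimate_severity(frame) < threshold:
            break  # remaining artifacts are negligible: terminate early
    return frame, depth

# Toy usage: each "stage" halves a scalar artifact magnitude, and
# severity is simply its absolute value.
stages = [lambda f: f * 0.5] * 5
print(enhance_with_termination(0.8, stages, abs))   # heavy compression: runs 4 stages
print(enhance_with_termination(0.15, stages, abs))  # light compression: stops after 1
```

This matches the reported efficiency behavior qualitatively: lower compression severity triggers earlier termination and hence lower inference latency.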