PMQ-VE: Progressive Multi-Frame Quantization for Video Enhancement

πŸ“… 2025-05-18
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing quantization methods for video enhancement models suffer from rigid inter-frame representation allocation and over-reliance on full-precision teachers, leading to significant performance degradation and loss of fine detail. To address these issues, we propose a two-stage quantization framework: (1) Backtracking-based Multi-Frame Quantization (BMFQ), which adaptively determines clipping bounds across frames via percentile-based initialization and an iterative search with pruning and backtracking; and (2) Progressive Multi-Teacher Distillation (PMTD), which jointly leverages full-precision and high-bit (e.g., INT8) teachers to guide low-bit (INT4/INT6) student models. Evaluated on super-resolution, denoising, and frame interpolation tasks, the method achieves state-of-the-art performance with near-lossless PSNR (within 0.05 dB), a 73% reduction in computational cost, and a 68% lower memory footprint compared to full-precision counterparts.
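As a rough illustration of the BMFQ idea, the sketch below initializes clipping bounds from activation percentiles, uniformly quantizes within them, and greedily tightens the range while reconstruction error improves, backtracking to the best bounds seen. The function names, the multiplicative shrink schedule, and the MSE criterion are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def percentile_init_bounds(x, p=99.9):
    """Initialize clipping bounds from activation percentiles."""
    lo = np.percentile(x, 100 - p)
    hi = np.percentile(x, p)
    return lo, hi

def uniform_quantize(x, lo, hi, bits=4):
    """Uniformly quantize x into 2**bits levels within [lo, hi]."""
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels
    q = np.clip(np.round((x - lo) / scale), 0, levels)
    return q * scale + lo  # dequantized values for error measurement

def search_bounds(x, bits=4, p=99.9, shrink=0.98, steps=20):
    """Iteratively shrink the clip range, keeping (backtracking to)
    the bounds with the lowest quantization MSE seen so far."""
    lo, hi = percentile_init_bounds(x, p)
    best, best_err = (lo, hi), np.mean((x - uniform_quantize(x, lo, hi, bits)) ** 2)
    for _ in range(steps):
        lo, hi = lo * shrink, hi * shrink
        err = np.mean((x - uniform_quantize(x, lo, hi, bits)) ** 2)
        if err < best_err:
            best_err, best = err, (lo, hi)
    return best  # backtrack: return the best bounds, not the last ones tried
```

The percentile initialization discards extreme outliers so the INT4 grid is not wasted on rare activation spikes; the search then trades clipping error against rounding error.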

πŸ“ Abstract
Multi-frame video enhancement tasks aim to improve the spatial and temporal resolution and quality of video sequences by leveraging temporal information from multiple frames, and are widely used in streaming video processing, surveillance, and generation. Although numerous Transformer-based enhancement methods have achieved impressive performance, their computational and memory demands hinder deployment on edge devices. Quantization offers a practical solution by reducing the bit-width of weights and activations to improve efficiency. However, directly applying existing quantization methods to video enhancement tasks often leads to significant performance degradation and loss of fine details. This stems from two limitations: (a) inability to allocate varying representational capacity across frames, which results in suboptimal dynamic range adaptation; (b) over-reliance on full-precision teachers, which limits the learning of low-bit student models. To tackle these challenges, we propose a novel quantization method for video enhancement: Progressive Multi-Frame Quantization for Video Enhancement (PMQ-VE). This framework features a coarse-to-fine two-stage process: Backtracking-based Multi-Frame Quantization (BMFQ) and Progressive Multi-Teacher Distillation (PMTD). BMFQ utilizes a percentile-based initialization and iterative search with pruning and backtracking for robust clipping bounds. PMTD employs a progressive distillation strategy with both full-precision and multiple high-bit (INT) teachers to enhance low-bit models' capacity and quality. Extensive experiments demonstrate that our method outperforms existing approaches, achieving state-of-the-art performance across multiple tasks and benchmarks. The code will be made publicly available at: https://github.com/xiaoBIGfeng/PMQ-VE.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational and memory demands for video enhancement on edge devices
Addressing performance degradation in quantized video enhancement methods
Improving dynamic range adaptation and learning in low-bit models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive Multi-Frame Quantization for efficiency
Backtracking-based Multi-Frame Quantization for robustness
Progressive Multi-Teacher Distillation for quality
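A minimal sketch of the progressive multi-teacher distillation idea: the low-bit student is supervised jointly by the ground truth, a full-precision teacher, and an intermediate high-bit (e.g., INT8) teacher. The loss weights `alpha` and `beta` and the plain MSE terms are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two arrays."""
    return float(np.mean((a - b) ** 2))

def pmtd_loss(student_out, teacher_fp, teacher_int8, target, alpha=0.5, beta=0.3):
    """Illustrative multi-teacher distillation objective for a low-bit student.

    Combines a supervised reconstruction term with distillation terms
    from a full-precision teacher and an easier-to-match high-bit teacher.
    """
    task = mse(student_out, target)           # supervised reconstruction loss
    kd_fp = mse(student_out, teacher_fp)      # distill from full-precision teacher
    kd_int8 = mse(student_out, teacher_int8)  # distill from high-bit (INT8) teacher
    return task + alpha * kd_fp + beta * kd_int8
```

The intuition behind the extra INT8 teacher is that its outputs lie closer to what an INT4/INT6 student can actually represent, providing a gentler intermediate target than the full-precision model alone.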
πŸ”Ž Similar Papers
No similar papers found.
ZhanFeng Feng
University of Science and Technology of China
Long Peng
China Electric Power Research Institute
LCC-HVDC and VSC-HVDC Transmission Technologies
Xin Di
University of Science and Technology of China
Computer vision · Low-level vision · Super-resolution
Yong Guo
Huawei Technologies Co., Ltd.
Wenbo Li
The Chinese University of Hong Kong
Computer Vision · Deep Learning
Yulun Zhang
Shanghai Jiao Tong University
Renjing Pei
Huawei Technologies Co., Ltd.
Yang Wang
University of Science and Technology of China, Chang’an University
Yang Cao
University of Science and Technology of China
Zheng-Jun Zha
University of Science and Technology of China