🤖 AI Summary
To address the significant degradation in reconstruction performance caused by low-bit quantization in video snapshot compressive imaging (SCI), this paper proposes Q-SCI, an end-to-end lightweight and efficient quantization framework. Our key contributions are: (1) a high-fidelity feature extraction and reconstruction module; (2) a query/key distribution shifting operation to mitigate attention distortion in Transformers under low-bit quantization; and (3) seamless integration of quantization-aware training (QAT), low-bit network quantization, and adaptation to efficient SCI architectures (e.g., EfficientSCI-S). Evaluated on simulated datasets, the 4-bit Q-SCI model achieves a theoretical 7.8× speedup with only a 2.3% PSNR drop compared to its full-precision counterpart, attaining state-of-the-art performance. To the best of our knowledge, Q-SCI is the first framework to enable both high-accuracy and high-efficiency deep quantized reconstruction in the SCI domain.
📝 Abstract
Video Snapshot Compressive Imaging (SCI) uses a low-speed 2D camera to capture a high-speed scene as snapshot compressed measurements, followed by a reconstruction algorithm to recover the high-speed video frames. State-of-the-art (SOTA) deep learning-based algorithms have achieved impressive performance, yet at a heavy computational cost. Network quantization is a promising way to reduce this cost; however, direct low-bit quantization causes a large performance drop. To address this challenge, in this paper we propose a simple low-bit quantization framework (dubbed Q-SCI) for end-to-end deep learning-based video SCI reconstruction methods, which usually consist of feature extraction, feature enhancement, and video reconstruction modules. Specifically, we first design a high-quality feature extraction module and a precise video reconstruction module to extract and propagate high-quality features in the low-bit quantized model. In addition, to alleviate the information distortion of the Transformer branch in the quantized feature enhancement module, we introduce a shift operation on the query and key distributions to further narrow the performance gap. Comprehensive experimental results demonstrate that our Q-SCI framework achieves superior performance: for example, the 4-bit quantized EfficientSCI-S derived by our Q-SCI framework can theoretically accelerate the real-valued EfficientSCI-S by 7.8× with only a 2.3% performance gap on the simulated test datasets. Code is available at https://github.com/mcao92/QuantizedSCI.
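To give intuition for the query/key shift operation described above, here is a minimal numpy sketch, not the paper's exact scheme: when query/key activations share a large common offset, a symmetric low-bit quantizer wastes most of its range on that offset; subtracting the mean before quantizing and restoring it afterwards with exact rank-1 corrections reduces the error in the attention logits. The quantizer, the Gaussian toy data, and the mean-shift correction are all illustrative assumptions.

```python
import numpy as np

def uniform_quantize(x, bits=4):
    """Symmetric uniform quantizer (a generic stand-in, not the paper's quantizer)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    return np.clip(np.round(x / scale), -qmax, qmax) * scale

rng = np.random.default_rng(0)
d = 16
# toy queries/keys with a strong positive offset, mimicking a skewed activation distribution
q = rng.normal(loc=2.0, scale=0.5, size=(8, d))
k = rng.normal(loc=2.0, scale=0.5, size=(8, d))

attn_fp = q @ k.T  # full-precision attention logits

# naive 4-bit quantization: the shared offset consumes most of the symmetric range
attn_naive = uniform_quantize(q) @ uniform_quantize(k).T

# shift both distributions toward zero mean before quantizing
mu_q, mu_k = q.mean(), k.mean()
q0 = uniform_quantize(q - mu_q)
k0 = uniform_quantize(k - mu_k)
# the removed offsets are restored exactly via cheap rank-1 corrections
attn_shift = (q0 @ k0.T
              + mu_q * k0.sum(axis=1)[None, :]
              + mu_k * q0.sum(axis=1)[:, None]
              + d * mu_q * mu_k)

err_naive = np.linalg.norm(attn_fp - attn_naive)
err_shift = np.linalg.norm(attn_fp - attn_shift)
print(err_shift < err_naive)  # the shifted variant should track the logits more closely
```

The key point is that the shift changes only where the quantization grid is centered; the attention computation itself is unchanged, which is why such an operation adds negligible overhead to the quantized Transformer branch.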