DeepHQ: Learned Hierarchical Quantizer for Progressive Deep Image Coding

📅 2024-08-22
🏛️ arXiv.org
📈 Citations: 4
Influential: 1
🤖 AI Summary
Existing progressive image coding (PIC) methods rely on handcrafted quantization hierarchies, limiting compression efficiency and adaptability. This paper proposes an end-to-end learnable hierarchical quantization framework, the first to jointly optimize quantization step sizes across all layers, and introduces a selective feature-masking mechanism that dynamically preserves salient representations. The method integrates differentiable quantization, hierarchical latent-space modeling, and joint rate-distortion optimization, enabling a single bitstream to be decoded into images of multiple qualities. Evaluated on multiple benchmarks, it significantly outperforms state-of-the-art progressive methods, achieving a 15.3% BD-rate reduction, 37% lower decoding latency, and 29% fewer model parameters, thereby breaking the performance ceiling imposed by manual quantization design.

📝 Abstract
Unlike fixed- or variable-rate image coding, progressive image coding (PIC) aims to compress images of various qualities into a single bitstream, increasing the versatility of bitstream utilization and providing higher compression efficiency than simulcast compression. Research on neural network (NN)-based PIC is in its early stages, mainly focusing on applying varying quantization step sizes to the transformed latent representations in a hierarchical manner. These approaches are designed to compress only the progressively added information as the quality improves, considering that a wider quantization interval for lower-quality compression contains multiple narrower sub-intervals for higher-quality compression. However, the existing methods are based on handcrafted quantization hierarchies, resulting in sub-optimal compression efficiency. In this paper, we propose an NN-based progressive coding method that, for the first time, utilizes learned quantization step sizes for each quantization layer. We also incorporate selective compression, with which only the essential representation components are compressed for each quantization layer. We demonstrate that our method achieves significantly higher coding efficiency than the existing approaches, with decreased decoding time and reduced model size.
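The nested-interval idea in the abstract (a wider quantization interval containing an integer number of narrower sub-intervals, so each quality layer transmits only the refinement) can be illustrated with a minimal sketch. This is not the paper's implementation: the step sizes are hardcoded here rather than learned, midpoint reconstruction is one simple decoder choice, and the selective-masking component is omitted.

```python
import numpy as np

def hierarchical_quantize(x, step_sizes):
    # Nested-interval quantization: each step size must evenly divide the
    # previous one, so every coarse interval splits into an integer number
    # of finer sub-intervals.
    indices = [int(np.floor(x / s)) for s in step_sizes]
    # Layer 0 transmits its full index; each later layer transmits only the
    # sub-interval index within its parent interval, i.e. the
    # "progressively added information".
    layers = [indices[0]]
    for coarse, fine, s_c, s_f in zip(indices, indices[1:],
                                      step_sizes, step_sizes[1:]):
        layers.append(fine - coarse * (s_c // s_f))
    return layers

def reconstruct(layers, step_sizes):
    # Decode only the first len(layers) quality layers; finer layers refine
    # the coarse index, and the midpoint of the final interval is returned.
    idx = layers[0]
    for sub, s_c, s_f in zip(layers[1:], step_sizes, step_sizes[1:]):
        idx = idx * (s_c // s_f) + sub
    return (idx + 0.5) * step_sizes[len(layers) - 1]
```

Decoding a prefix of the layers yields a coarser image, and each additional layer halves the interval width here (hypothetical step sizes 8, 4, 2, 1), which is exactly the single-bitstream, multi-quality behavior the abstract describes.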
Problem

Research questions and friction points this paper is trying to address.

Existing PIC methods rely on handcrafted quantization hierarchies, yielding sub-optimal compression efficiency
How to learn the quantization step size of each quantization layer end-to-end
How to compress only the essential representation components at each quantization layer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learned hierarchical quantization via neural networks
Selective compression of essential representation components
Improved coding efficiency with reduced decoding time
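The selective-compression contribution can also be sketched minimally: per layer, only elements deemed essential are entropy-coded, and the rest are skipped. The importance scores, the threshold, and zero-filling at the decoder are illustrative assumptions here, not the paper's actual mechanism.

```python
import numpy as np

def encode_layer(latent, importance, threshold):
    # Hypothetical selective compression: keep only latent elements whose
    # (assumed learned) importance score exceeds this layer's threshold.
    mask = importance > threshold
    payload = latent[mask]  # only these values would be entropy-coded
    return mask, payload

def decode_layer(mask, payload):
    # The decoder restores skipped positions with zeros (one simple choice).
    out = np.zeros(mask.shape, dtype=payload.dtype)
    out[mask] = payload
    return out
```

Raising the threshold for coarser layers would shrink each layer's payload, trading reconstruction quality for rate, which is the intuition behind compressing only essential components per quantization layer.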
Jooyoung Lee
School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon 305-731, Republic of Korea
S. Jeong
Media Research Division, Electronics and Telecommunications Research Institute, Daejeon, 34129, Republic of Korea
Munchurl Kim
Professor, School of Electrical Engineering, Korea Advanced Institute of Science and Technology
Research interests: Deep Learning for Image Restoration and Quality Enhancement; Deep Video Compression; Image Analysis and Understanding; Pattern Re