HiDE: Hierarchical Dictionary-Based Entropy Modeling for Learned Image Compression

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing learned image compression methods, which struggle to leverage external priors from large-scale data effectively and are constrained by single-layer dictionary structures that limit representational capacity. To overcome these challenges, the authors propose HiDE, a framework built around a hierarchical dictionary architecture that separately models global structural and local textural external priors. HiDE combines a cascaded retrieval mechanism with a context-aware, multi-receptive-field parallel parameter estimation network, substantially improving the accuracy of conditional probability estimation and the expressiveness of the entropy model. Experiments show that HiDE achieves BD-rate savings of 18.5%, 21.99%, and 24.01% over the VTM-12.1 anchor on the Kodak, CLIC, and Tecnick datasets, respectively.

📝 Abstract
Learned image compression (LIC) has achieved remarkable coding efficiency, where entropy modeling plays a pivotal role in minimizing bitrate through informative priors. Existing methods predominantly exploit internal contexts within the input image, yet the rich external priors embedded in large-scale training data remain largely underutilized. Recent advances in dictionary-based entropy models have demonstrated that incorporating external priors can substantially enhance compression performance. However, current approaches organize heterogeneous external priors within a single-level dictionary, resulting in imbalanced utilization and limited representational capacity. Moreover, effective entropy modeling requires not only expressive priors but also a parameter estimation network capable of interpreting them. To address these challenges, we propose HiDE, a Hierarchical Dictionary-based Entropy modeling framework for learned image compression. HiDE decomposes external priors into global structural and local detail dictionaries with cascaded retrieval, enabling structured and efficient utilization of external information. Moreover, a context-aware parameter estimator with parallel multi-receptive-field design is introduced to adaptively exploit heterogeneous contexts for accurate conditional probability estimation. Experimental results show that HiDE achieves 18.5%, 21.99%, and 24.01% BD-rate savings over VTM-12.1 on the Kodak, CLIC, and Tecnick datasets, respectively.
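The cascaded retrieval described in the abstract can be sketched in a toy form: a query feature is first matched against a global structural dictionary, and the winning atom then selects a local detail dictionary for a second, finer match; both retrieved priors would feed the parameter estimator of the entropy model. This is a minimal illustration under assumed names and shapes, not the paper's actual implementation:

```python
import numpy as np

def cascaded_retrieval(query, global_dict, local_dicts):
    """Toy sketch of cascaded hierarchical-dictionary retrieval
    (illustrative only; all shapes and the fusion step are assumptions,
    not the HiDE architecture itself)."""
    eps = 1e-8
    # Stage 1: cosine-similarity match against the global structural dictionary.
    g_scores = global_dict @ query / (
        np.linalg.norm(global_dict, axis=1) * np.linalg.norm(query) + eps)
    g_idx = int(np.argmax(g_scores))
    g_prior = global_dict[g_idx]

    # Stage 2 (cascade): the global match selects a local detail dictionary,
    # which is searched again for a fine-grained prior.
    local_dict = local_dicts[g_idx]
    l_scores = local_dict @ query / (
        np.linalg.norm(local_dict, axis=1) * np.linalg.norm(query) + eps)
    l_prior = local_dict[int(np.argmax(l_scores))]

    # A parameter estimation network would map (query, priors) to the mean
    # and scale of the conditional distribution; a trivial stand-in here.
    fused = np.concatenate([query, g_prior, l_prior])
    mu, scale = float(fused.mean()), float(fused.std())
    return mu, scale, g_idx
```

In the paper the two dictionaries hold heterogeneous external priors (structure vs. texture) learned from training data, and the second lookup is conditioned on the first, which is what the toy `g_idx` indexing stands in for.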
Problem

Research questions and friction points this paper is trying to address.

learned image compression
entropy modeling
external priors
dictionary-based
hierarchical representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Dictionary
External Priors
Entropy Modeling
Learned Image Compression
Context-aware Parameter Estimation