AI Summary
Existing extreme image compression methods rely heavily on semantic clustering at ultra-low bitrates (below 0.01 bpp), leading to severe loss of fine-grained details and degraded reconstruction fidelity. To address this, we propose a dual-generative latent-fusion framework, introducing a semantic-detail disentangled two-branch compression paradigm: a semantic branch performs clustering-based tokenization, while a detail branch encodes perceptually critical information; cross-branch interaction suppresses redundancy between the branches and enforces consistency. The framework constructs dual latent paths via a generative tokenizer, jointly optimizing for high-fidelity reconstruction and generative realism. On the CLIC2020 test set, our method achieves bitrate savings of up to 27.93% on LPIPS and 53.55% on DISTS relative to MS-ILLM, and delivers markedly better visual fidelity than state-of-the-art diffusion-based codecs.
Abstract
Recent studies in extreme image compression have achieved remarkable performance by compressing the tokens produced by generative tokenizers. However, these methods often prioritize clustering common semantics across the dataset while overlooking the diverse details of individual objects, which results in suboptimal reconstruction fidelity, especially at low bitrates. To address this issue, we introduce a Dual-generative Latent Fusion (DLF) paradigm. DLF decomposes the latent into semantic and detail elements, compressing them through two distinct branches. The semantic branch clusters high-level information into compact tokens, while the detail branch encodes perceptually critical details to enhance the overall fidelity. Additionally, we propose a cross-branch interactive design to reduce redundancy between the two branches, thereby minimizing the overall bit cost. Experimental results demonstrate the impressive reconstruction quality of DLF even below 0.01 bits per pixel (bpp). On the CLIC2020 test set, our method achieves bitrate savings of up to 27.93% on LPIPS and 53.55% on DISTS compared to MS-ILLM. Furthermore, DLF surpasses recent diffusion-based codecs in visual fidelity while maintaining a comparable level of generative realism. Code will be available later.
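The semantic/detail decomposition described above can be illustrated with a toy sketch. This is not the paper's actual architecture (DLF uses learned generative tokenizers and a cross-branch interactive design); it is a minimal NumPy analogy in which the semantic branch is a nearest-codeword quantizer over a hypothetical codebook, the detail branch carries the residual the quantizer discards, and fusion simply recombines the two streams:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent: 16 spatial positions x 8 channels, standing in for a
# generative tokenizer's output (shapes are illustrative only).
latent = rng.standard_normal((16, 8))

# Semantic branch: cluster each latent vector to its nearest codeword in a
# small (here random, normally learned) codebook, keeping only compact
# token indices -- the coarse structure shared across the dataset.
codebook = rng.standard_normal((4, 8))                     # 4 codewords
dists = ((latent[:, None, :] - codebook[None]) ** 2).sum(axis=-1)
tokens = dists.argmin(axis=1)                              # one index per position
semantic = codebook[tokens]                                # de-quantized semantic latent

# Detail branch: the perceptually important residual that clustering loses.
detail = latent - semantic

# Fusion: recombine both streams. With a lossless residual, reconstruction
# is exact; a real codec would compress `detail` lossily, trading bits
# against fidelity, and let cross-branch interaction prune redundancy.
recon = semantic + detail
print(np.allclose(recon, latent))  # True
```

The point of the analogy: the token indices are cheap to transmit but lose object-specific detail, which is exactly the gap the detail branch is meant to close.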