MRT: Learning Compact Representations with Mixed RWKV-Transformer for Extreme Image Compression

📅 2025-11-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing extreme image compression methods predominantly employ two-dimensional latent representations, which suffer from high spatial redundancy and limited compression efficiency. To address this, we propose a hierarchical one-dimensional representation learning framework built on a hybrid RWKV-Transformer architecture: RWKV modules model long-range dependencies across image windows, while Transformer blocks capture local structural redundancy within each window; linear attention keeps computation efficient, and a dedicated RWKV Compression Model (RCM) is tailored to the 1-D latent features. This design substantially reduces latent-space redundancy. At ultra-low bitrates (<0.02 bpp), it achieves bitrate savings of 43.75% and 30.59% on the Kodak and CLIC2020 benchmarks, respectively, outperforming state-of-the-art methods such as GLC. Our work establishes an efficient, compact, and effective 1-D generative paradigm for extreme image compression.

📝 Abstract
Recent advances in extreme image compression have revealed that mapping pixel data into highly compact latent representations can significantly improve coding efficiency. However, most existing methods compress images into 2-D latent spaces via convolutional neural networks (CNNs) or Swin Transformers, which tend to retain substantial spatial redundancy, thereby limiting overall compression performance. In this paper, we propose a novel Mixed RWKV-Transformer (MRT) architecture that encodes images into more compact 1-D latent representations by synergistically integrating the complementary strengths of linear-attention-based RWKV and self-attention-based Transformer models. Specifically, MRT partitions each image into fixed-size windows, utilizing RWKV modules to capture global dependencies across windows and Transformer blocks to model local redundancies within each window. This hierarchical attention mechanism enables more efficient and compact representation learning in the 1-D domain. To further enhance compression efficiency, we introduce a dedicated RWKV Compression Model (RCM) tailored to the structural characteristics of the intermediate 1-D latent features in MRT. Extensive experiments on standard image compression benchmarks validate the effectiveness of our approach. The proposed MRT framework consistently achieves superior reconstruction quality at bitrates below 0.02 bits per pixel (bpp). Quantitative results based on the DISTS metric show that MRT significantly outperforms the state-of-the-art 2-D architecture GLC, achieving bitrate savings of 43.75% and 30.59% on the Kodak and CLIC2020 test datasets, respectively.
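The window-partitioning step described above can be made concrete with a small sketch. The snippet below (an illustrative NumPy reconstruction, not the paper's code; the function name and window size are assumptions) splits an image into fixed-size non-overlapping windows and flattens each window's pixels into a 1-D token sequence — the layout on which, per the abstract, Transformer blocks would model intra-window redundancy while RWKV modules scan across windows:

```python
import numpy as np

def partition_windows(image, win=4):
    """Split a (C, H, W) image into non-overlapping win x win windows,
    flattening each window's pixels into 1-D tokens.

    Returns shape (num_windows, win*win, C): a Transformer would model
    redundancy *within* each window (axis 1), while an RWKV-style
    recurrence would scan *across* windows (axis 0).
    Hypothetical helper for illustration only.
    """
    C, H, W = image.shape
    assert H % win == 0 and W % win == 0, "dims must divide the window size"
    # (C, H/win, win, W/win, win) -> (H/win, W/win, win, win, C)
    x = image.reshape(C, H // win, win, W // win, win)
    x = x.transpose(1, 3, 2, 4, 0)
    # merge the window grid into one axis, flatten pixels per window
    return x.reshape(-1, win * win, C)

# toy example: a 3-channel 8x8 image -> 4 windows of 16 tokens each
img = np.arange(3 * 8 * 8, dtype=np.float32).reshape(3, 8, 8)
tokens = partition_windows(img, win=4)
print(tokens.shape)  # (4, 16, 3)
```

The hierarchical split is the point of the design: self-attention cost grows quadratically only within the small window, while the cross-window dependencies are handled by linear-cost RWKV recurrence.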
Problem

Research questions and friction points this paper is trying to address.

Developing compact 1-D latent representations for extreme image compression
Reducing spatial redundancy in existing 2-D compression methods
Integrating RWKV and Transformer models for efficient representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Mixed RWKV-Transformer for 1-D latent encoding
Combines global RWKV and local Transformer attention
Introduces RWKV Compression Model for latent features