🤖 AI Summary
To address the challenge of simultaneously achieving high rate-distortion performance and low computational complexity in image compression, this paper proposes the first end-to-end learnable compression framework integrating convolutional neural networks (CNNs) and state space models (SSMs). Our method introduces: (1) a content-adaptive SSM module that dynamically captures both long-range semantic dependencies and local details; and (2) a context-aware entropy coding module that jointly models spatial and channel-wise redundancies. The architecture unifies CNNs, SSMs, learnable quantization, nonlinear transformations, and autoregressive entropy modeling. Evaluated on standard benchmarks including CLIC and Kodak, our approach achieves state-of-the-art rate-distortion performance. It reduces model parameters by 37%, FLOPs by 52%, and inference latency by 48% compared to prior methods, outperforming both lightweight and mainstream models. This demonstrates a superior trade-off between compression efficiency and reconstruction quality.
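To give a rough picture of the content-adaptive fusion idea summarized above, the following PyTorch-style sketch mixes a global (SSM-like) branch with a local convolutional branch through a learned per-position gate. The `SSMBlockPlaceholder` and `ContentAdaptiveFusion` names, the gating design, and all hyperparameters are illustrative assumptions, not the paper's actual CA-SSM implementation.

```python
# Hypothetical sketch in the spirit of the content-adaptive SSM module:
# a CNN branch captures local detail, a global token-mixing branch stands in
# for the SSM, and a learned gate mixes the two per spatial position.
import torch
import torch.nn as nn

class SSMBlockPlaceholder(nn.Module):
    """Stand-in for a state space model block (global token mixing); not Mamba."""
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mix = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C)
        tokens = self.mix(self.norm(tokens))
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class ContentAdaptiveFusion(nn.Module):
    """Gated fusion of a global (SSM-like) branch and a local (CNN) branch."""
    def __init__(self, dim):
        super().__init__()
        self.global_branch = SSMBlockPlaceholder(dim)
        self.local_branch = nn.Conv2d(dim, dim, kernel_size=3, padding=1)
        self.gate = nn.Conv2d(2 * dim, dim, kernel_size=1)  # content-adaptive weights

    def forward(self, x):
        g = self.global_branch(x)          # overall content
        l = self.local_branch(x)           # high-frequency local details
        w = torch.sigmoid(self.gate(torch.cat([g, l], dim=1)))
        return w * g + (1.0 - w) * l       # dynamic, per-position mixture

x = torch.randn(1, 64, 32, 32)
y = ContentAdaptiveFusion(64)(x)           # same spatial shape as x
```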
📝 Abstract
Learned Image Compression (LIC) has explored various architectures, such as Convolutional Neural Networks (CNNs) and transformers, to model image content distributions and achieve compression effectiveness. However, achieving high rate-distortion performance while maintaining low computational complexity (i.e., parameters, FLOPs, and latency) remains challenging. In this paper, we propose a hybrid CNN and State Space Model (SSM) based image compression framework, termed CMamba, to achieve superior rate-distortion performance with low computational complexity. Specifically, CMamba introduces two key components: a Content-Adaptive SSM (CA-SSM) module and a Context-Aware Entropy (CAE) module. First, we observe that SSMs excel at modeling overall content but tend to lose high-frequency details, whereas CNNs are proficient at capturing local details. Motivated by this, we propose the CA-SSM module, which dynamically fuses global content extracted by SSM blocks and local details captured by CNN blocks in both the encoding and decoding stages. As a result, important image content is well preserved during compression. Second, our proposed CAE module is designed to reduce spatial and channel redundancies in latent representations after encoding. Specifically, CAE leverages SSMs to parameterize the spatial content of latent representations. Benefiting from SSMs, CAE significantly improves spatial compression efficiency while reducing spatial content redundancies. Moreover, along the channel dimension, CAE reduces inter-channel redundancies of latent representations in an autoregressive manner, which fully exploits prior knowledge from previous channels without sacrificing efficiency. Experimental results demonstrate that CMamba achieves superior rate-distortion performance.
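To make the channel-wise autoregressive idea in the CAE module more concrete, here is a hedged sketch that splits the latent into channel groups and predicts Gaussian entropy parameters for each group conditioned on a hyperprior feature and all previously coded groups. The class name, the number of groups, and the simple 1x1-convolution predictors are assumptions for illustration only; the paper's actual CAE additionally uses SSMs to parameterize the spatial content.

```python
# Hypothetical sketch of channel-wise autoregressive entropy modeling: the latent
# is split into channel groups, and the Gaussian parameters (mu, sigma) of each
# group are predicted from the hyperprior plus all previously coded groups.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAutoregressiveEntropy(nn.Module):
    def __init__(self, latent_channels=192, hyper_channels=192, num_groups=4):
        super().__init__()
        assert latent_channels % num_groups == 0
        self.group_size = latent_channels // num_groups
        # One parameter predictor per group; group i sees the hyperprior and groups < i.
        self.predictors = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(hyper_channels + i * self.group_size,
                          2 * self.group_size, kernel_size=1),
                nn.GELU(),
                nn.Conv2d(2 * self.group_size, 2 * self.group_size, kernel_size=1),
            )
            for i in range(num_groups)
        ])

    def forward(self, y, hyper):
        """y: (B, C, H, W) latent; hyper: (B, hyper_channels, H, W) hyperprior feature."""
        groups = torch.split(y, self.group_size, dim=1)
        decoded, means, scales = [], [], []
        for i, g in enumerate(groups):
            ctx = torch.cat([hyper] + decoded, dim=1)       # causal channel context
            mu, sigma = self.predictors[i](ctx).chunk(2, dim=1)
            means.append(mu)
            scales.append(F.softplus(sigma))                # positive scale parameters
            # Quantization is usually approximated during training; plain rounding
            # is used here purely for illustration.
            decoded.append(torch.round(g - mu) + mu)
        return torch.cat(means, dim=1), torch.cat(scales, dim=1)
```

The design choice illustrated here is that each channel group is coded conditioned only on groups that have already been decoded, so the model can exploit inter-channel redundancy while remaining decodable without per-pixel sequential scanning.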