🤖 AI Summary
Existing learned image compression (LIC) methods face trade-offs: Transformers and state-space models (SSMs) excel at long-range modeling but often lose structural fidelity and neglect frequency characteristics critical for compression, while CNNs lack global contextual awareness. To address this, we propose HCFSSNet, a hybrid architecture that integrates convolutional priors with frequency-domain state-space modeling. Specifically, we design the Vision Frequency State Space (VFSS) block, which combines omnidirectional neighborhood scanning with adaptive frequency-domain modulation, and introduce a frequency-aware Swin Transformer attention module that sharpens frequency-selective attention and improves bit-allocation efficiency. The architecture jointly captures local high-frequency details and global low-frequency structures. Experiments show that HCFSSNet achieves BD-rate reductions of 18.06%, 24.56%, and 22.44% over VTM on the Kodak, Tecnick, and CLIC benchmarks, respectively, performing competitively with MambaIC while using significantly fewer parameters, and substantially advances rate-distortion performance in learned image compression.
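The BD-rate figures quoted above are Bjøntegaard delta-rate values: the average bitrate change, in percent, of a codec relative to an anchor at equal quality. The paper does not include the computation, but the standard procedure (cubic fit of log-rate against PSNR, then integration over the overlapping quality range) can be sketched as follows; the function name and curve values here are illustrative, not from the paper.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta-rate: average bitrate change (%) of the test
    codec versus the anchor at equal quality, via cubic interpolation
    of log-rate as a function of PSNR."""
    log_rate_a = np.log(rate_anchor)
    log_rate_t = np.log(rate_test)
    # Fit log-rate as a cubic polynomial of quality for each codec.
    poly_a = np.polyfit(psnr_anchor, log_rate_a, 3)
    poly_t = np.polyfit(psnr_test, log_rate_t, 3)
    # Integrate both fits over the overlapping quality range.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(poly_a), hi) - np.polyval(np.polyint(poly_a), lo)
    int_t = np.polyval(np.polyint(poly_t), hi) - np.polyval(np.polyint(poly_t), lo)
    # Average log-rate gap over the range -> percentage rate change.
    avg_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_diff) - 1.0) * 100.0
```

A negative result (e.g. -18.06% on Kodak) means the test codec needs that much less bitrate than the anchor for the same quality.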
📝 Abstract
Learned image compression (LIC) has recently benefited from Transformer-based and state space model (SSM)-based architectures. Convolutional neural networks (CNNs) effectively capture local high-frequency details, whereas Transformers and SSMs provide strong long-range modeling capabilities but may cause structural information loss or ignore frequency characteristics that are crucial for compression. In this work we propose HCFSSNet, a Hybrid Convolution and Frequency State Space Network for LIC. HCFSSNet uses CNNs to extract local high-frequency structures and introduces a Vision Frequency State Space (VFSS) block that models long-range low-frequency information. The VFSS block combines an Omnidirectional Neighborhood State Space (VONSS) module, which scans features horizontally, vertically, and diagonally, with an Adaptive Frequency Modulation Module (AFMM) that applies content-adaptive weighting to discrete cosine transform (DCT) frequency components for more efficient bit allocation. To further reduce redundancy in the entropy model, we integrate the AFMM with a Swin Transformer to form a Frequency Swin Transformer Attention Module (FSTAM) for frequency-aware side-information modeling. Experiments on the Kodak, Tecnick, and CLIC Professional Validation datasets show that HCFSSNet achieves competitive rate-distortion performance compared with recent SSM-based codecs such as MambaIC, while using significantly fewer parameters. On Kodak, Tecnick, and CLIC, HCFSSNet reduces BD-rate over the VTM anchor by 18.06%, 24.56%, and 22.44%, respectively, providing an efficient and interpretable hybrid architecture for future learned image compression systems.
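The abstract describes the AFMM as a content-adaptive weighting of discrete cosine transform frequency components. A minimal, illustrative sketch of that idea is below; the function names and the `weight_fn` placeholder (standing in for the learned weight predictor) are our own assumptions, not the paper's implementation, and a real AFMM would operate blockwise on multi-channel feature tensors inside the network.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]   # frequency index
    i = np.arange(n)[None, :]   # spatial index
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)  # DC row gets the smaller scale
    return m

def adaptive_frequency_modulation(feat, weight_fn):
    """Toy AFMM-style operator: transform a 2-D feature map to the
    DCT domain, scale each frequency bin by a content-derived weight,
    and transform back.  `weight_fn` maps the coefficient array to an
    array of per-frequency gains (a stand-in for the learned module)."""
    h, w = feat.shape
    dct_h, dct_w = dct_matrix(h), dct_matrix(w)
    coeffs = dct_h @ feat @ dct_w.T     # forward 2-D DCT
    coeffs = coeffs * weight_fn(coeffs) # reweight frequency components
    return dct_h.T @ coeffs @ dct_w     # inverse 2-D DCT
```

With all-ones weights the operator is an identity, since the DCT basis is orthonormal; attenuating high-frequency bins instead mimics spending fewer bits on detail the content does not need.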