Hybrid Convolution and Frequency State Space Network for Image Compression

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing learned image compression (LIC) methods face trade-offs: Transformers and state-space models (SSMs) excel at long-range modeling but can lose structural information and neglect frequency characteristics critical for compression, while CNNs lack global contextual awareness. To address this, the paper proposes HCFSSNet, a hybrid architecture that integrates convolutional priors with frequency-domain state-space modeling. Specifically, it introduces a Vision Frequency State Space (VFSS) block, which pairs an Omni-directional Neighborhood State Space (VONSS) module for horizontal, vertical and diagonal scanning with an Adaptive Frequency Modulation Module (AFMM) that adaptively reweights DCT frequency components for efficient bit allocation, and a Frequency Swin Transformer Attention Module (FSTAM) for frequency-aware side-information modeling in the entropy model. The architecture jointly captures local high-frequency details and global low-frequency structures. Experimental results show that HCFSSNet achieves BD-rate reductions of 18.06%, 24.56% and 22.44% over VTM on the Kodak, Tecnick and CLIC benchmarks, respectively, matching MambaIC's rate-distortion performance with significantly fewer parameters.

📝 Abstract
Learned image compression (LIC) has recently benefited from Transformer-based and state-space model (SSM) based architectures. Convolutional neural networks (CNNs) effectively capture local high-frequency details, whereas Transformers and SSMs provide strong long-range modeling capabilities but may cause structural information loss or ignore frequency characteristics that are crucial for compression. In this work we propose HCFSSNet, a Hybrid Convolution and Frequency State Space Network for LIC. HCFSSNet uses CNNs to extract local high-frequency structures and introduces a Vision Frequency State Space (VFSS) block that models long-range low-frequency information. The VFSS block combines an Omni-directional Neighborhood State Space (VONSS) module, which scans features horizontally, vertically and diagonally, with an Adaptive Frequency Modulation Module (AFMM) that applies content-adaptive weighting of discrete cosine transform frequency components for more efficient bit allocation. To further reduce redundancy in the entropy model, we integrate the AFMM with a Swin Transformer to form a Frequency Swin Transformer Attention Module (FSTAM) for frequency-aware side-information modeling. Experiments on the Kodak, Tecnick and CLIC Professional Validation datasets show that HCFSSNet achieves competitive rate-distortion performance compared with recent SSM-based codecs such as MambaIC, while using significantly fewer parameters. On Kodak, Tecnick and CLIC, HCFSSNet reduces BD-rate over the VTM anchor by 18.06%, 24.56% and 22.44%, respectively, providing an efficient and interpretable hybrid architecture for future learned image compression systems.
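The omni-directional scanning the abstract attributes to VONSS (serializing a 2-D feature map along horizontal, vertical and diagonal orders for a state-space model) can be illustrated with a minimal sketch. The function name and layout are ours, not the paper's implementation:

```python
import numpy as np

def omni_scans(feat):
    """Illustrative omni-directional scanning (hypothetical stand-in for VONSS).

    Flattens a 2-D feature map into three 1-D sequences -- row-major,
    column-major, and anti-diagonal order -- each of which an SSM could
    consume as a causal sequence.
    """
    h, w = feat.shape
    horizontal = feat.reshape(-1)        # left-to-right, top-to-bottom
    vertical = feat.T.reshape(-1)        # top-to-bottom, left-to-right
    # Anti-diagonals of the original map, from top-left to bottom-right:
    # flipping the rows turns anti-diagonals into ordinary diagonals.
    diagonal = np.concatenate(
        [feat[::-1].diagonal(k) for k in range(-(h - 1), w)]
    )
    return horizontal, vertical, diagonal

# Example: a 2x2 map [[1, 2], [3, 4]]
hz, vt, dg = omni_scans(np.array([[1.0, 2.0], [3.0, 4.0]]))
```

Each returned sequence is a permutation of the same h*w values, so the three scans expose every spatial neighborhood to the sequence model from a different direction.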
Problem

Research questions and friction points this paper is trying to address.

How to combine convolution and frequency state-space modeling in a hybrid network for image compression
How to model both local high-frequency details and long-range low-frequency information
How to reduce redundancy in the entropy model with frequency-aware side information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid CNN and frequency state-space network architecture
Vision Frequency State Space (VFSS) block with omni-directional scanning
Adaptive Frequency Modulation Module (AFMM) for content-adaptive bit allocation
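The AFMM idea above (content-adaptive weighting of DCT frequency components) can be sketched as follows. This is a minimal stand-in, not the paper's implementation: `weight_fn` represents the learned network that maps content to per-frequency gains, and all names are illustrative:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n), so its transpose is its inverse."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1.0 / np.sqrt(n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

def adaptive_frequency_modulation(feat, weight_fn):
    """Illustrative AFMM stand-in: modulate DCT coefficients of a feature map.

    feat: (H, W) feature map.
    weight_fn: callable mapping the DCT coefficient array to per-frequency
    weights (in the real model this would be a small learned network).
    """
    h, w = feat.shape
    dh, dw = dct_matrix(h), dct_matrix(w)
    coeffs = dh @ feat @ dw.T          # forward 2-D DCT
    weights = weight_fn(coeffs)        # content-adaptive per-frequency gains
    modulated = coeffs * weights       # emphasize or suppress frequency bands
    return dh.T @ modulated @ dw       # inverse 2-D DCT back to spatial domain
```

With an identity weighting (all gains equal to 1) the transform round-trips exactly, since the orthonormal DCT is invertible; a learned `weight_fn` would instead damp frequency components that are expensive to code relative to their perceptual value.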
Haodong Pan
State Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an, 710049, Shaanxi, China
Hao Wei
State Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an, 710049, Shaanxi, China
Yusong Wang
Tokyo Institute of Technology
Representation Learning · Affective Computing
Nanning Zheng
Xi'an Jiaotong University
Caigui Jiang
Xi'an Jiaotong University
Computer Graphics · Architectural Geometry