D3C2-Net: Dual-Domain Deep Convolutional Coding Network for Compressive Sensing

πŸ“… 2022-07-27
πŸ›οΈ IEEE transactions on circuits and systems for video technology (Print)
πŸ“ˆ Citations: 11
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Most existing deep unfolding networks (DUNs) for compressive sensing rely solely on image-domain unfolding, which limits information transmission capacity and reconstruction flexibility and leads to loss of image detail. To overcome this, the authors develop a dual-domain optimization framework that combines priors from both the image domain and a learnable convolutional coding domain, and unfold it into the Dual-Domain Deep Convolutional Coding Network (D3C2-Net), which transmits high-capacity self-adaptive convolutional features across all unfolded stages. Experiments on simulated and real captured data, covering 2D and 3D natural, medical, and scientific signals, demonstrate superior reconstruction performance and generalization over competing approaches while balancing accuracy, complexity, and interpretability. Source code is publicly available.
πŸ“ Abstract
By mapping iterative optimization algorithms into neural networks (NNs), deep unfolding networks (DUNs) exhibit well-defined and interpretable structures and achieve remarkable success in the field of compressive sensing (CS). However, most existing DUNs solely rely on the image-domain unfolding, which restricts the information transmission capacity and reconstruction flexibility, leading to their loss of image details and unsatisfactory performance. To overcome these limitations, this paper develops a dual-domain optimization framework that combines the priors of (1) image- and (2) convolutional-coding-domains and offers generality to CS and other inverse imaging tasks. By converting this optimization framework into deep NN structures, we present a Dual-Domain Deep Convolutional Coding Network (D3C2-Net), which enjoys the ability to efficiently transmit high-capacity self-adaptive convolutional features across all its unfolded stages. Our theoretical analyses and experiments on simulated and real captured data, covering 2D and 3D natural, medical, and scientific signals, demonstrate the effectiveness, practicality, superior performance, and generalization ability of our method over other competing approaches and its significant potential in achieving a balance among accuracy, complexity, and interpretability. Code is available at https://github.com/lwq20020127/D3C2-Net.
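The core idea behind deep unfolding, mapping each iteration of an optimization algorithm to one network stage, can be illustrated with a toy example. The sketch below unfolds plain ISTA for sparse recovery in NumPy. It is a generic illustration under assumed names (`unfolded_ista`, `soft_threshold`), not the authors' dual-domain architecture; in D3C2-Net each stage's step size and threshold would be learned, and convolutional-coding-domain features would also propagate across stages.

```python
import numpy as np

def soft_threshold(v, theta):
    # Proximal operator of the l1 norm: shrinks entries toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def unfolded_ista(y, A, n_stages=200, rho=None, theta=0.01):
    """Toy unfolded ISTA for y = A @ x with sparse x.
    Each loop iteration corresponds to one unfolded 'stage';
    in a DUN, rho and theta would be learnable per-stage parameters."""
    if rho is None:
        rho = 1.0 / np.linalg.norm(A, 2) ** 2  # step size 1/L (L = spectral norm squared)
    x = np.zeros(A.shape[1])
    for _ in range(n_stages):
        # Gradient step on the data-fidelity term, then proximal (denoising) step.
        x = soft_threshold(x - rho * A.T @ (A @ x - y), rho * theta)
    return x

# Small synthetic compressive sensing problem: 50 measurements of a
# 3-sparse signal of length 100 (illustrative sizes, chosen arbitrarily).
rng = np.random.default_rng(0)
n, m = 100, 50
x_true = np.zeros(n)
x_true[[3, 20, 77]] = [1.0, -0.5, 2.0]
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
y = A @ x_true

x_hat = unfolded_ista(y, A)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

In a trained DUN, the fixed `rho` and `theta` above become per-stage learned parameters, and the hand-crafted soft-thresholding prior is replaced by a learned network, which is what enables the stronger prior modeling the paper targets.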
Problem

Research questions and friction points this paper is trying to address.

Existing DUNs rely solely on image-domain unfolding, restricting information transmission capacity
Image-domain-only priors cause loss of image details and unsatisfactory reconstruction
How to balance accuracy, computational complexity, and interpretability in CS and other inverse imaging tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-domain optimization framework for CS
Deep unfolding network with interpretable structures
High-capacity self-adaptive convolutional features
Weiqi Li
School of Electronic and Computer Engineering, Peking University, Shenzhen 518055, China
Bin Chen
School of Electronic and Computer Engineering, Peking University, Shenzhen 518055, China
Shuai Liu
Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055, China
Shijie Zhao
ByteDance Inc, Shenzhen 518055, China
Bowen Du
Beihang University
Yongbing Zhang
School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen 518055, China
Jian Zhang
School of Electronic and Computer Engineering, Peking University, Shenzhen 518055, China