π€ AI Summary
Existing deep unfolding networks (DUNs) are confined to image-domain unfolding, resulting in weak prior modeling and insufficient detail recovery. To address this, we propose a dual-domain deep unfolding framework that jointly models priors in both the image domain and a learnable convolutional coding domainβmarking the first integration of convolutional coding priors into DUN architectures. This enables adaptive cross-domain feature propagation. The method achieves high reconstruction accuracy, low computational complexity, and strong interpretability, and is applicable to 2D/3D natural images, medical imaging, and scientific signals. Extensive experiments on both simulated and real-world data demonstrate average PSNR gains of 1.2β2.8 dB over state-of-the-art methods. The source code is publicly available and empirically validated for practical deployment.
π Abstract
By mapping iterative optimization algorithms into neural networks (NNs), deep unfolding networks (DUNs) exhibit well-defined and interpretable structures and achieve remarkable success in the field of compressive sensing (CS). However, most existing DUNs solely rely on the image-domain unfolding, which restricts the information transmission capacity and reconstruction flexibility, leading to their loss of image details and unsatisfactory performance. To overcome these limitations, this paper develops a dual-domain optimization framework that combines the priors of (1) image- and (2) convolutional-coding-domains and offers generality to CS and other inverse imaging tasks. By converting this optimization framework into deep NN structures, we present a Dual-Domain Deep Convolutional Coding Network (D3C2-Net), which enjoys the ability to efficiently transmit high-capacity self-adaptive convolutional features across all its unfolded stages. Our theoretical analyses and experiments on simulated and real captured data, covering 2D and 3D natural, medical, and scientific signals, demonstrate the effectiveness, practicality, superior performance, and generalization ability of our method over other competing approaches and its significant potential in achieving a balance among accuracy, complexity, and interpretability. Code is available at https://github.com/lwq20020127/D3C2-Net.