🤖 AI Summary
Accelerated MRI reconstruction suffers from reliance on scarce fully-sampled labeled data and from information loss in existing self-supervised methods. Method: We propose a dual-domain self-supervised framework that trains on under-sampled data alone. Its core innovations are: (1) a novel *re-visible dual-domain self-supervision* mechanism that avoids k-space re-partitioning and the associated information loss; (2) a physics-informed deep unrolling network, DUN-CP-PPA, built upon the Chambolle–Pock proximal point algorithm and incorporating a Spatial-Frequency Feature Extraction (SFFE) block to jointly leverage imaging physics and image priors; and (3) a re-visible dual-domain loss operating simultaneously in both the k-space and image domains. Results: Evaluated on the fastMRI and IXI datasets, the method significantly outperforms state-of-the-art approaches, achieving superior structural fidelity (SSIM) and fine-detail recovery while eliminating dependence on costly fully-sampled ground-truth labels.
📝 Abstract
Magnetic Resonance Imaging (MRI) is widely used in clinical practice but suffers from prolonged acquisition times. Although deep learning methods have been proposed to accelerate acquisition and have demonstrated promising performance, they rely on high-quality fully-sampled datasets for training in a supervised manner. However, such datasets are time-consuming and expensive to collect, which constrains their broader application. Self-supervised methods offer an alternative by enabling learning from under-sampled data alone, but most existing methods rely on further partitioning the under-sampled k-space data to form the model's input for training, resulting in a loss of valuable information. Additionally, their models have not fully incorporated image priors, leading to degraded reconstruction performance. In this paper, we propose a novel re-visible dual-domain self-supervised deep unfolding network to address these issues when only under-sampled datasets are available. Specifically, by incorporating a re-visible dual-domain loss, all under-sampled k-space data are utilized during training to mitigate the information loss caused by further partitioning. This design enables the model to implicitly adapt to all under-sampled k-space data as input. Additionally, we design a deep unfolding network based on the Chambolle–Pock Proximal Point Algorithm (DUN-CP-PPA) to achieve end-to-end reconstruction, incorporating imaging physics and image priors to guide the reconstruction process. By employing a Spatial-Frequency Feature Extraction (SFFE) block to capture global and local feature representations, we enhance the model's ability to learn comprehensive image priors. Experiments conducted on the fastMRI and IXI datasets demonstrate that our method significantly outperforms state-of-the-art approaches in terms of reconstruction performance.
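The reconstruction scheme the abstract describes, alternating a k-space data-consistency step with a proximal (image-prior) step inside an unrolled loop, can be sketched as follows. This is a minimal single-coil illustration: a hand-coded soft-thresholding operator stands in for the paper's learned SFFE/proximal modules, and the mask, sizes, and step/threshold parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fft2c(x):
    """Centered, orthonormal 2D FFT (image -> k-space)."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(x), norm="ortho"))

def ifft2c(k):
    """Centered, orthonormal 2D inverse FFT (k-space -> image)."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k), norm="ortho"))

def soft_threshold(x, lam):
    """Complex soft-thresholding; a stand-in for a learned proximal module."""
    mag = np.abs(x)
    scale = np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12)
    return x * scale

def unrolled_recon(y, mask, n_iters=20, step=1.0, lam=0.02):
    """Unrolled proximal reconstruction sketch.

    Each iteration applies an image-domain proximal step, then a
    gradient step on the data-fidelity term ||M F x - y||^2,
    where M is the sampling mask and F the centered FFT.
    """
    x = ifft2c(y)  # zero-filled initialization
    for _ in range(n_iters):
        x = soft_threshold(x, lam)           # image-prior step
        grad = ifft2c(mask * fft2c(x) - y)   # A^H (A x - y)
        x = x - step * grad                  # k-space data consistency
    return x
```

With `step=1.0` the final gradient step makes the reconstruction exactly data-consistent at the sampled k-space locations, which mirrors the hard data-consistency layers common in unrolled networks; a learned network would replace `soft_threshold` with a trainable module at each unrolled iteration.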