🤖 AI Summary
This work proposes a hybrid model-driven and data-driven convolutional sparse coding framework to address the limited interpretability and robustness of learned image reconstruction. The method uses a neural network to generate spatially adaptive sparsity level maps, which are integrated into a model-based convolutional dictionary regularization. By enforcing filter-permutation invariance, the architecture allows the convolutional dictionary to be replaced at inference time. Evaluated on low-field MRI reconstruction, including in vivo data, the proposed approach performs on par with recent deep learning methods on in-distribution data while suffering less from distribution shift on out-of-distribution data, a robustness the authors attribute to its reduced reliance on training data through its model-based reconstruction component.
📝 Abstract
State-of-the-art learned reconstruction methods often rely on black-box modules that, despite their strong performance, raise questions about interpretability and robustness. Here, we build on a recently proposed image reconstruction method that embeds data-driven information into a model-based convolutional dictionary regularization via neural-network-inferred spatially adaptive sparsity level maps. Through improved network design and dedicated training strategies, we extend the method to achieve filter-permutation invariance as well as the possibility of changing the convolutional dictionary at inference time. We apply our method to low-field MRI and compare it to several other recent deep learning-based methods, including on in vivo data, where the benefit of using a different dictionary is showcased. We further assess the method's robustness when tested on in- and out-of-distribution data. On the latter, the proposed method suffers less from the data distribution shift than the other learned methods, which we attribute to its reduced reliance on training data owing to its underlying model-based reconstruction component.
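To make the regularization idea concrete, the following is a minimal sketch of convolutional sparse coding with a spatially adaptive sparsity map, implemented as ISTA-style iterations. This is an illustration of the general technique, not the paper's implementation: here the sparsity map `lam_map` is a fixed array, whereas in the paper it is inferred by a neural network, and all function and variable names are our own.

```python
import numpy as np
from scipy.signal import fftconvolve

def soft_threshold(x, tau):
    # Elementwise soft-thresholding; tau may be a spatially varying map,
    # which is what makes the sparsity level spatially adaptive.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def csc_ista(x, filters, lam_map, n_iter=100, step=0.5):
    """Sketch: minimize 0.5*||x - sum_k d_k * z_k||^2 + sum_k ||lam_map ⊙ z_k||_1
    over coefficient maps z_k, for a fixed convolutional dictionary {d_k}."""
    zs = [np.zeros_like(x) for _ in filters]
    for _ in range(n_iter):
        recon = sum(fftconvolve(z, d, mode="same") for z, d in zip(zs, filters))
        resid = x - recon
        for k, d in enumerate(filters):
            # Correlation with the flipped filter is the adjoint of convolution.
            grad = fftconvolve(resid, d[::-1, ::-1], mode="same")
            # Gradient step on the data term, then shrink with the spatial map.
            zs[k] = soft_threshold(zs[k] + step * grad, step * lam_map)
    return zs
```

Because the sum over `k` treats all filters symmetrically, reordering the dictionary leaves the reconstruction unchanged; the paper's contribution is to make the learned components respect this filter-permutation invariance as well, so that the dictionary can be swapped at inference time.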