🤖 AI Summary
Highly accelerated magnetic resonance fingerprinting (MRF) is prone to aliasing artifacts from undersampling and lacks large-scale training data with quantitative ground truth. To address these challenges, this work proposes MRI2Qmap, a novel framework that, for the first time, integrates a physics-driven compressed sensing model with a deep denoising autoencoder pretrained on large-scale conventional weighted MRI data. By employing a plug-and-play optimization strategy, the method reconstructs quantitative multi-parameter maps without requiring quantitative ground-truth labels. It leverages anatomical priors embedded in routine clinical MRI scans to substantially improve MRF reconstruction quality. Evaluated on highly accelerated 3D whole-brain MRF data, the proposed approach achieves performance comparable to or better than existing methods, thereby removing the dependency on quantitative ground truth for training.
📝 Abstract
Magnetic Resonance Fingerprinting (MRF) and other highly accelerated transient-state parameter mapping techniques enable simultaneous quantification of multiple tissue properties, but often suffer from aliasing artifacts due to undersampling. Incorporating spatial image priors can mitigate these artifacts, and deep learning has shown strong potential when large training datasets are available. However, extending this paradigm to MRF-type sequences remains challenging because quantitative imaging data for training are scarce. Can this limitation be overcome by leveraging training data from routinely acquired clinical weighted MR images? To this end, we introduce MRI2Qmap, a plug-and-play quantitative reconstruction framework that integrates the physical acquisition model with priors learned by deep denoising autoencoders pretrained on large multimodal weighted-MRI datasets. MRI2Qmap demonstrates that spatial-domain structural priors learned from independently acquired routine weighted MR images can be used effectively for quantitative MRI reconstruction. The proposed method is validated on highly accelerated 3D whole-brain MRF data from both in vivo and simulated acquisitions, achieving competitive or superior performance relative to existing baselines without requiring ground-truth quantitative imaging data for training. By decoupling quantitative reconstruction from the need for ground-truth MRF training data, this framework points toward a scalable paradigm for quantitative MRI that can capitalize on the large and growing repositories of routine clinical MRI.
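To illustrate the plug-and-play idea at the heart of this framework, here is a minimal toy sketch (not the paper's implementation): a reconstruction loop alternates a data-consistency gradient step on the acquisition model with a denoising step that stands in for the pretrained deep denoising autoencoder. The operator `A`, the signal, and the simple smoothing denoiser are all hypothetical placeholders chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 32                                   # signal size, undersampled measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)    # toy acquisition operator (placeholder)
x_true = np.zeros(n)
x_true[::8] = 1.0                               # toy piecewise-sparse "tissue map"
y = A @ x_true                                  # simulated undersampled data

def denoise(x, w=0.1):
    # Stand-in prior: mild local smoothing. The actual framework plugs in
    # a deep denoising autoencoder pretrained on weighted MRI data.
    xp = np.pad(x, 1, mode="edge")
    return (1 - w) * x + w * 0.5 * (xp[:-2] + xp[2:])

# Plug-and-play iteration: gradient step on ||A x - y||^2, then denoise.
step = 1.0 / np.linalg.norm(A, 2) ** 2          # step size from the Lipschitz constant
x = np.zeros(n)
for _ in range(200):
    x = x - step * (A.T @ (A @ x - y))          # enforce data consistency
    x = denoise(x)                              # plug in the denoiser as the prior

print(np.linalg.norm(A @ x - y) / np.linalg.norm(y))
```

The key design point mirrored here is that the prior never needs quantitative ground truth: the denoiser is trained (or, in this toy, defined) independently of the acquisition, and the physics model alone ties the reconstruction to the measured data.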