🤖 AI Summary
This work addresses the limited availability of public MRI datasets beyond the brain and knee, which has hindered systematic investigation into the generalization capabilities of deep learning models for musculoskeletal (MSK) imaging. To bridge this gap, the authors introduce MosaicMRI—the largest open-source raw MSK MRI dataset to date—encompassing diverse anatomical regions, contrast weightings, scan orientations, and coil configurations. Using VarNet as a baseline, they conduct accelerated reconstruction experiments and perform controlled ablation studies to analyze cross-anatomical transferability. Their findings reveal, for the first time, that multi-anatomy joint training substantially outperforms single-anatomy training in low-data regimes, with particularly pronounced gains in underrepresented regions such as the foot and elbow. The results demonstrate that model performance is jointly influenced by data volume, anatomical type, and imaging protocol, underscoring the critical role of anatomical diversity in enhancing model generalization.
📝 Abstract
Deep learning underpins a wide range of applications in MRI, including reconstruction, artifact removal, and segmentation. However, progress has been driven largely by public datasets focused on brain and knee imaging, shaping how models are trained and evaluated. As a result, careful studies of the reliability of these models across diverse anatomical settings remain limited. In this work, we introduce MosaicMRI, a large and diverse collection of fully sampled raw musculoskeletal (MSK) MR measurements designed for training and evaluating machine-learning-based methods. MosaicMRI is the largest open-source raw MSK MRI dataset to date, comprising 2,671 volumes and 80,156 slices. The dataset offers substantial diversity in volume orientation (e.g., axial, sagittal), imaging contrasts (e.g., PD, T1, T2), anatomies (e.g., spine, knee, hip, ankle, and others), and numbers of acquisition coils. Using VarNet as a baseline for the accelerated reconstruction task, we perform a comprehensive set of experiments to study scaling behavior with respect to both model capacity and dataset size. Interestingly, models trained on the combined anatomies significantly outperform anatomy-specific models in low-sample regimes, highlighting the benefits of anatomical diversity and the presence of exploitable cross-anatomical correlations. We further evaluate robustness and cross-anatomy generalization by training models on one anatomy (e.g., spine) and testing them on another (e.g., knee). Notably, we identify groups of body parts (e.g., foot and elbow) that generalize well with each other, and highlight that performance under domain shifts depends jointly on training set size, anatomy, and protocol-specific factors.
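The accelerated-reconstruction setup the abstract describes can be illustrated with a minimal sketch: a fully sampled slice is retrospectively undersampled in k-space, and the zero-filled inverse FFT yields the aliased input that a model such as VarNet would refine. The function name, mask design, and shapes below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def undersample_and_reconstruct(image, acceleration=4, center_fraction=0.08, seed=0):
    """Retrospectively undersample a 2D slice along phase-encode columns,
    then return the sampling mask and the zero-filled reconstruction.
    (Hypothetical helper for illustration; not from the paper.)"""
    kspace = np.fft.fftshift(np.fft.fft2(image))  # fully sampled k-space
    h, w = kspace.shape
    rng = np.random.default_rng(seed)

    # Randomly keep ~1/acceleration of the phase-encode lines...
    mask = rng.random(w) < (1.0 / acceleration)
    # ...and always keep a fully sampled low-frequency band in the center.
    num_center = int(round(w * center_fraction))
    start = (w - num_center) // 2
    mask[start:start + num_center] = True

    undersampled = kspace * mask[None, :]  # zero out unsampled lines
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
    return mask, zero_filled

# Example: a synthetic 64x64 "slice" with a bright square
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
mask, recon = undersample_and_reconstruct(img)
```

A learned reconstructor is then trained to map `recon` (or the undersampled k-space plus `mask`) back to the fully sampled ground truth.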