🤖 AI Summary
Clinical access to 7T MRI remains limited, while conventional 3T MRI offers lower spatial resolution and tissue contrast for fine neuroanatomical analysis. Method: We propose a deep learning framework integrating a U-Net with a generative adversarial network (GAN) to perform high-fidelity translation from 3T to synthetic 7T T1-weighted brain MRI. The model is trained end-to-end on paired 3T/7T data using a multi-scale perceptual loss and structural consistency constraints. Contribution/Results: Quantitative evaluation and blinded radiologist assessment show that synthesized 7T images are comparable to real 7T acquisitions in anatomical detail and rated higher in subjective visual quality, apparently owing to artifact suppression. Automated brain segmentation achieves an 8.2% Dice score improvement over real 3T, approaching expert manual annotations. In Alzheimer’s disease cognitive status prediction, performance matches that of real 3T scans. This approach offers a generalizable, cost-effective route to 7T-quality neuroimaging.
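The summary mentions training with a multi-scale perceptual loss plus structural consistency constraints. As a rough illustration only (the paper's exact losses, weights, and feature extractor are not given here), the sketch below approximates the multi-scale term with pixel-wise L1 at several resolutions (a real perceptual loss would compare deep network features) and the structural term with an image-gradient difference; the function names and weights `w_perc`/`w_struct` are hypothetical.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool a 2D image by an integer factor (crude multi-scale proxy)."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor
    img = img[:h2, :w2]
    return img.reshape(h2 // factor, factor, w2 // factor, factor).mean(axis=(1, 3))

def multiscale_l1(pred, target, factors=(1, 2, 4)):
    """Stand-in for a multi-scale perceptual loss: mean absolute error
    accumulated over several resolutions. (True perceptual losses compare
    features from a pretrained network rather than raw pixels.)"""
    total = 0.0
    for f in factors:
        total += np.abs(downsample(pred, f) - downsample(target, f)).mean()
    return total / len(factors)

def gradient_consistency(pred, target):
    """One possible structural-consistency term: penalize differences in
    spatial image gradients, encouraging preserved anatomical edges."""
    gx = np.abs(np.diff(pred, axis=0) - np.diff(target, axis=0)).mean()
    gy = np.abs(np.diff(pred, axis=1) - np.diff(target, axis=1)).mean()
    return gx + gy

def total_loss(pred, target, w_perc=1.0, w_struct=0.5):
    """Hypothetical combined generator loss (weights are illustrative)."""
    return w_perc * multiscale_l1(pred, target) + w_struct * gradient_consistency(pred, target)
```

The combined loss is zero when the synthesized and target images agree exactly, and grows with both intensity and edge discrepancies.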
📝 Abstract
Ultra-high-field 7 tesla (7T) magnetic resonance imaging (MRI) provides detailed anatomical views, offering better signal-to-noise ratio, resolution, and tissue contrast than 3T MRI, though at the cost of accessibility. We present an advanced deep learning model for synthesizing 7T brain MRI from 3T brain MRI. Paired 7T and 3T T1-weighted images were acquired from 172 participants (124 cognitively unimpaired, 48 impaired) from the Swedish BioFINDER-2 study. To synthesize 7T MRI from 3T images, we trained two models: a specialized U-Net, and a U-Net integrated with a generative adversarial network (GAN U-Net). Our models outperformed two additional state-of-the-art 3T-to-7T models in image-based evaluation metrics. Four blinded MRI professionals judged our synthetic 7T images as comparable in detail to real 7T images, and superior in subjective visual quality to real 7T images, apparently due to the reduction of artifacts. Importantly, automated segmentations of the amygdalae of synthetic GAN U-Net 7T images were more similar to manually segmented amygdalae (n=20) than automated segmentations from the 3T images that were used to synthesize the 7T images. Finally, synthetic 7T images showed similar performance to real 3T images in downstream prediction of cognitive status using MRI derivatives (n=3,168). In sum, we show that synthetic T1-weighted brain images approaching 7T quality can be generated from 3T images, which may improve image quality and segmentation without compromising performance in downstream tasks. Future directions, possible clinical use cases, and limitations are discussed.
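Segmentation similarity of the kind described in the abstract (automated vs. manual amygdala masks) is conventionally quantified with the Dice similarity coefficient, 2|A∩B| / (|A| + |B|). A minimal sketch, assuming binary masks as numpy arrays (the function name and the empty-mask convention are choices made here, not taken from the paper):

```python
import numpy as np

def dice_score(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2*|A∩B| / (|A| + |B|). Equals 1.0 for identical non-empty masks,
    0.0 for disjoint ones."""
    a = np.asarray(mask_a).astype(bool)
    b = np.asarray(mask_b).astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks count as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

For example, a mask of two voxels compared against a mask sharing one of them gives 2·1/(2+1) ≈ 0.667; higher Dice for synthetic-7T-derived segmentations against manual ground truth is what "more similar" means quantitatively.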