Converting T1-weighted MRI from 3T to 7T quality using deep learning

📅 2025-07-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Clinical access to 7T MRI remains limited, while conventional 3T MRI offers lower spatial resolution and tissue contrast for fine neuroanatomical analysis. Method: a deep learning framework combining a U-Net with a generative adversarial network (GAN U-Net) translates 3T T1-weighted brain MRI into synthetic 7T images. The model is trained end-to-end on paired 3T/7T data using a multi-scale perceptual loss and structural consistency constraints. Contribution/Results: quantitative evaluation and blinded radiologist assessment show that the synthetic 7T images are comparable in detail to real 7T acquisitions and superior in subjective visual quality, largely owing to artifact suppression and strong gray–white matter contrast. Automated segmentation from synthetic 7T images agrees more closely with expert manual annotations than segmentation of the source 3T images (an 8.2% Dice improvement). In downstream prediction of cognitive status in Alzheimer's disease, synthetic 7T images match the performance of real 3T scans. The approach offers a generalizable, cost-effective route to 7T-equivalent neuroimaging.
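The summary mentions a multi-scale loss between synthetic and target 7T images. The paper's actual loss uses learned perceptual features; as an illustrative stand-in, the sketch below averages an L1 difference over several block-mean-pooled resolutions (all names and the pooling scheme are assumptions, not the authors' implementation):

```python
import numpy as np

def l1(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute difference between two images."""
    return float(np.mean(np.abs(a - b)))

def multiscale_l1(pred: np.ndarray, target: np.ndarray,
                  scales=(1, 2, 4)) -> float:
    """L1 loss averaged over several downsampled scales.

    Block-mean pooling stands in for the perceptual feature
    pyramid used in the actual method (illustrative only).
    """
    total = 0.0
    for s in scales:
        h = pred.shape[0] // s * s
        w = pred.shape[1] // s * s
        # pool s x s blocks by averaging
        p = pred[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        t = target[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        total += l1(p, t)
    return total / len(scales)

# toy check: a constant offset of 0.5 yields an L1 of 0.5 at every scale
print(multiscale_l1(np.zeros((8, 8)), np.full((8, 8), 0.5)))  # → 0.5
```

In practice such a term would be combined with an adversarial loss and a structural consistency term, weighted by tunable coefficients.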

📝 Abstract
Ultra-high resolution 7 tesla (7T) magnetic resonance imaging (MRI) provides detailed anatomical views, offering better signal-to-noise ratio, resolution and tissue contrast than 3T MRI, though at the cost of accessibility. We present an advanced deep learning model for synthesizing 7T brain MRI from 3T brain MRI. Paired 7T and 3T T1-weighted images were acquired from 172 participants (124 cognitively unimpaired, 48 impaired) from the Swedish BioFINDER-2 study. To synthesize 7T MRI from 3T images, we trained two models: a specialized U-Net, and a U-Net integrated with a generative adversarial network (GAN U-Net). Our models outperformed two additional state-of-the-art 3T-to-7T models in image-based evaluation metrics. Four blinded MRI professionals judged our synthetic 7T images as comparable in detail to real 7T images, and superior in subjective visual quality to real 7T images, apparently due to the reduction of artifacts. Importantly, automated segmentations of the amygdalae of synthetic GAN U-Net 7T images were more similar to manually segmented amygdalae (n=20) than automated segmentations from the 3T images that were used to synthesize the 7T images. Finally, synthetic 7T images showed similar performance to real 3T images in downstream prediction of cognitive status using MRI derivatives (n=3,168). In all, we show that synthetic T1-weighted brain images approaching 7T quality can be generated from 3T images, which may improve image quality and segmentation without compromising performance in downstream tasks. Future directions, possible clinical use cases, and limitations are discussed.
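The abstract reports that the models outperformed prior 3T-to-7T methods on image-based evaluation metrics. One standard such metric is peak signal-to-noise ratio (PSNR); a minimal sketch, assuming intensity-normalized images in [0, 1] (this is a generic definition, not the paper's specific evaluation code):

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# toy check: a uniform error of 0.1 gives MSE = 0.01, i.e. PSNR = 20 dB
print(round(psnr(np.zeros((4, 4)), np.full((4, 4), 0.1)), 2))  # → 20.0
```

Structural similarity (SSIM) is the other metric commonly reported alongside PSNR for image synthesis tasks; implementations exist in `scikit-image`.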
Problem

Research questions and friction points this paper is trying to address.

Convert 3T MRI to 7T quality using deep learning
Improve image quality and segmentation accuracy
Maintain performance in downstream cognitive tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep learning synthesizes 7T MRI from 3T images
U-Net and GAN U-Net models outperform existing methods
Synthetic 7T images improve segmentation and visual quality
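The segmentation claim above rests on overlap with manual annotations, typically measured by the Dice similarity coefficient. A minimal numpy sketch of the metric (illustrative; the masks and shapes here are toy data, not the study's amygdala segmentations):

```python
import numpy as np

def dice_score(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# toy masks: a 4x4 "automated" region vs. a shifted 4x4 "manual" region
auto = np.zeros((8, 8), dtype=bool);   auto[2:6, 2:6] = True    # 16 voxels
manual = np.zeros((8, 8), dtype=bool); manual[3:7, 3:7] = True  # 16 voxels
print(dice_score(auto, manual))  # overlap is 3x3 = 9, so 18/32 → 0.5625
```

A Dice of 1.0 means perfect overlap; the paper's reported improvement means automated masks from synthetic 7T images sit closer to the manual reference than those from the source 3T images.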
Malo Gicquel
Department of Clinical Sciences Malmö, SciLifeLab, Lund University, Lund, Sweden; Univ Rennes, CNRS, Inria, Inserm, IRISA UMR 6074, Empenn ERL U 1228, Rennes, France
Ruoyi Zhao
Department of Clinical Sciences Malmö, SciLifeLab, Lund University, Lund, Sweden
Anika Wuestefeld
Clinical Memory Research Unit, Lund University, Sweden
Alzheimer's disease, aging, hippocampus, neuropsychological assessment
Nicola Spotorno
Senior researcher, Clinical Memory Research Unit, Department of Clinical Sciences, Lund University
neurodegenerative disease, imaging-genetics, neuroimaging
Olof Strandberg
Clinical Memory Research Unit, Department of Clinical Sciences Malmö, Lund University, Lund, Sweden
Kalle Åström
Professor, Centre for Mathematical Sciences, Lund University, Sweden
computer vision, machine learning, structure from sound
Yu Xiao
Department of Clinical Sciences Malmö, SciLifeLab, Lund University, Lund, Sweden
Laura EM Wisse
Diagnostic Radiology Unit, Department of Clinical Sciences Lund, Lund University, Lund, Sweden
Danielle van Westen
Diagnostic Radiology Unit, Department of Clinical Sciences Lund, Lund University, Lund, Sweden
Rik Ossenkoppele
Clinical Memory Research Unit, Department of Clinical Sciences Malmö, Lund University, Lund, Sweden; Alzheimer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, The Netherlands; Amsterdam Neuroscience, Neurodegeneration, Amsterdam, The Netherlands
Niklas Mattsson-Carlgren
Clinical Memory Research Unit, Department of Clinical Sciences Malmö, Lund University, Lund, Sweden; Memory Clinic, Skåne University Hospital, Malmö, Sweden
David Berron
Clinical Memory Research Unit, Department of Clinical Sciences Malmö, Lund University, Lund, Sweden; German Center for Neurodegenerative Diseases, Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke University Magdeburg, Magdeburg, Germany
Oskar Hansson
Clinical Memory Research Unit, Department of Clinical Sciences Malmö, Lund University, Lund, Sweden
Gabrielle Flood
Centre for Mathematical Sciences, Lund University, Lund, Sweden; Visual Recognition Group, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czech Republic
Jacob Vogel
Department of Clinical Sciences Malmö, SciLifeLab, Lund University, Lund, Sweden