Brain MR Image Synthesis with Multi-contrast Self-attention GAN

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the clinical challenge of missing multimodal brain MRI sequences—such as T1c, T1n, T2, and T2f—due to scanning constraints, which hinders comprehensive brain tumor assessment. To overcome this limitation, the authors propose 3D-MC-SAGAN, a unified framework capable of synthesizing multiple missing modalities from a single T2 image. Built upon a 3D encoder-decoder architecture, the method incorporates Memory-Bounded Hybrid Attention to efficiently capture long-range dependencies and integrates a frozen U-Net segmentation module with a multi-task loss combining adversarial, reconstruction, perceptual, SSIM, and segmentation consistency constraints. This ensures high-fidelity synthesis while preserving tumor structure. Experimental results demonstrate state-of-the-art performance on 3D brain MRI, yielding anatomically plausible and visually coherent images whose quality supports tumor segmentation accuracy comparable to that achieved with real multimodal inputs, thereby substantially reducing clinical scanning burden.
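The multi-task loss named above combines adversarial, reconstruction, perceptual, SSIM, and segmentation-consistency terms. A minimal sketch of such a weighted composite objective in plain Python; the weights here are illustrative placeholders, not the paper's coefficients:

```python
def composite_loss(adv, recon, perceptual, ssim, seg_consistency,
                   w_adv=1.0, w_rec=100.0, w_perc=10.0, w_ssim=5.0, w_seg=1.0):
    """Weighted sum of the loss terms named in the summary.

    `ssim` is a similarity score in [0, 1], so it enters the objective
    as a dissimilarity, (1 - ssim). All weights are hypothetical.
    """
    return (w_adv * adv
            + w_rec * recon
            + w_perc * perceptual
            + w_ssim * (1.0 - ssim)
            + w_seg * seg_consistency)
```

In practice each argument would be a scalar tensor produced by the corresponding sub-network (critic, voxel-wise L1, perceptual feature distance, SSIM, and the frozen U-Net's segmentation agreement).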

📝 Abstract
Accurate and complete multi-modal Magnetic Resonance Imaging (MRI) is essential for neuro-oncological assessment, as each contrast provides complementary anatomical and pathological information. However, acquiring all modalities (e.g., T1c, T1n, T2, T2f) for every patient is often impractical due to time, cost, and patient discomfort, potentially limiting comprehensive tumour evaluation. We propose 3D-MC-SAGAN (3D Multi-Contrast Self-Attention Generative Adversarial Network), a unified 3D multi-contrast synthesis framework that generates high-fidelity missing modalities from a single T2 input while explicitly preserving tumour characteristics. The model employs a multi-scale 3D encoder-decoder generator with residual connections and a novel Memory-Bounded Hybrid Attention (MBHA) block to capture long-range dependencies efficiently, and is trained with a WGAN-GP critic and an auxiliary contrast-conditioning branch to produce T2f, T1n, and T1c volumes within a single unified network. A frozen 3D U-Net-based segmentation module introduces a segmentation-consistency constraint to preserve lesion morphology. The composite objective integrates adversarial, reconstruction, perceptual, structural similarity, contrast-classification, and segmentation-guided losses to align global realism with tumour-preserving structure. Extensive evaluation on 3D brain MRI datasets demonstrates that 3D-MC-SAGAN achieves state-of-the-art quantitative performance and generates visually coherent, anatomically plausible contrasts with improved distribution-level realism. Moreover, it maintains tumour segmentation accuracy comparable to fully acquired multi-modal inputs, highlighting its potential to reduce acquisition burden while preserving clinically meaningful information.
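The abstract describes an auxiliary contrast-conditioning branch that lets a single generator emit T2f, T1n, or T1c volumes from one T2 input. One common way to realize such conditioning is to concatenate a one-hot contrast code as extra input channels; a minimal NumPy sketch under that assumption (the paper's actual mechanism may differ):

```python
import numpy as np

# Target contrasts synthesized from a single T2 input, per the abstract.
CONTRASTS = ("T2f", "T1n", "T1c")

def condition_volume(t2_volume, target):
    """Append one-hot contrast channels to a (D, H, W) T2 volume.

    Returns a (1 + len(CONTRASTS), D, H, W) array: the T2 volume plus one
    constant channel per contrast, with only the target channel set to 1.
    This one-hot-channel scheme is an illustrative assumption, not the
    paper's documented design.
    """
    idx = CONTRASTS.index(target)
    code = np.zeros((len(CONTRASTS),) + t2_volume.shape, dtype=t2_volume.dtype)
    code[idx] = 1.0
    return np.concatenate([t2_volume[None], code], axis=0)
```

The conditioned array would then be fed to the 3D encoder-decoder generator, which reads the extra channels to decide which contrast to synthesize.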
Problem

Research questions and friction points this paper is trying to address.

multi-modal MRI
missing modality
brain tumor assessment
image synthesis
acquisition burden
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-contrast MRI synthesis
Self-attention GAN
Memory-Bounded Hybrid Attention
Segmentation-consistency constraint
3D generative model
Zaid A. Abod
Lecturer of Computer Science, Al Qasim Green University
Data Security · Quantum · Image Processing · Computer Graphics · Data Mining
Furqan Aziz
School of Computing and Mathematical Sciences, University of Leicester, Leicester, United Kingdom