Cross Modality Medical Image Synthesis for Improving Liver Segmentation

📅 2025-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the performance bottleneck of liver segmentation models caused by scarce annotated medical imaging data, this paper proposes a two-stage cross-modality synthesis framework that translates unpaired abdominal CT scans into MRI-like images to augment training data. The authors introduce EssNet, a deformation-invariant network designed to overcome the anatomical misalignment and distortion induced by the bidirectional constraints of CycleGAN-based cross-modality synthesis, and propose a unidirectional CT→MRI enhancement paradigm tailored to segmentation. By training a U-Net on a hybrid dataset combining the synthesized MRI images with real MRI images, the method achieves a 1.17% IoU improvement on a public liver segmentation benchmark, demonstrating that synthetically generated data can help alleviate the small-sample limitation in downstream segmentation tasks.

📝 Abstract
Deep learning-based computer-aided diagnosis (CAD) of medical images requires large datasets. However, the lack of large publicly available labeled datasets limits the development of deep learning-based CAD systems. Generative Adversarial Networks (GANs), in particular CycleGAN, can generate new cross-domain images without paired training data. However, most CycleGAN-based synthesis methods cannot overcome the alignment and asymmetry between the input and generated data. We propose a two-stage technique for synthesizing abdominal MRI through cross-modality translation of abdominal CT, and show that the synthetic data can improve the performance of a liver segmentation network. We increase the number of abdominal MRI images through cross-modality transformation of unpaired CT images using a CycleGAN-inspired deformation-invariant network called EssNet. We then combine the synthetic MRI images with the original MRI images and use them to improve the accuracy of a U-Net on a liver segmentation task: we train the U-Net first on real MRI images alone, and then on real and synthetic MRI images together. Comparing the two scenarios, the U-Net improves by 1.17% in Intersection over Union (IoU). The results show potential to address the data scarcity challenge in medical imaging.
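The IoU metric in which the reported 1.17% gain is measured can be computed per image from binary segmentation masks. A minimal sketch in plain Python (the toy masks below are illustrative, not data from the paper):

```python
def iou(pred, target):
    """Intersection over Union for flat binary masks (sequences of 0/1)."""
    intersection = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    # Two empty masks agree perfectly, so define IoU as 1.0 in that case.
    return intersection / union if union else 1.0

# Toy example: predicted vs. ground-truth liver pixels
pred   = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
print(iou(pred, target))  # intersection 2, union 4 -> 0.5
```

In practice the masks are 2D arrays and the same formula is applied to their flattened pixels, often averaged over the test set.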
Problem

Research questions and friction points this paper is trying to address.

Addresses lack of large labeled medical datasets for deep learning.
Improves liver segmentation using synthetic MRI from CT images.
Enhances U-Net performance with cross-modality image synthesis.
Innovation

Methods, ideas, or system contributions that make the work stand out.

CycleGAN for cross-modality image synthesis
EssNet for deformation invariant MRI generation
U-Net enhanced with synthetic MRI data
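For context on the second bullet: the bidirectional constraint that EssNet relaxes is CycleGAN's standard cycle-consistency objective, which for generators \(G: X \to Y\) and \(F: Y \to X\) reads (standard formulation, not a derivation specific to this paper):

```latex
\mathcal{L}_{\mathrm{cyc}}(G, F)
  = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\bigl[\lVert F(G(x)) - x \rVert_1\bigr]
  + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\bigl[\lVert G(F(y)) - y \rVert_1\bigr]
```

Enforcing both reconstruction directions on unpaired CT and MRI data is what can introduce the anatomical misalignment and distortion that the deformation-invariant design aims to avoid.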
Muhammad Rafiq
Department of Electrical and Computer Engineering, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan.
Hazrat Ali
University of Stirling
Artificial Intelligence, Generative AI, Medical AI, Healthcare
Ghulam Mujtaba
Anderson College of Business and Computing, Regis University
Zubair Shah
College of Science and Engineering, Hamad Bin Khalifa University, Education City, Doha, Qatar.
Shoaib Azmat
Associate Professor of Computer Engineering, COMSATS University
Computer Vision, Image Processing, Machine Learning, Hardware Acceleration