From Healthy Scans to Annotated Tumors: A Tumor Fabrication Framework for 3D Brain MRI Synthesis

๐Ÿ“… 2025-11-23
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Limited annotated MRI data for brain tumors severely hampers the training of automated segmentation models. Existing synthesis methods either rely on expert-designed priors or require large paired datasetsโ€”both impractical in clinical low-resource settings. This paper proposes the first unpaired two-stage 3D brain tumor synthesis framework: Stage I generates coarse-grained tumor structures from healthy brain scans; Stage II refines these using only a few annotated real images to jointly optimize lesion morphology and anatomical consistency. To our knowledge, this is the first approach integrating unpaired learning with generative modeling for medical image synthesis, requiring only healthy scans and minimal annotations to produce high-fidelity synthetic paired data. Experiments under low-data regimes demonstrate that the synthesized data significantly improves downstream segmentation performance (average Dice score increase of 4.2%), validating its clinical scalability and practical utility.

Technology Category

Application Category

๐Ÿ“ Abstract
The scarcity of annotated Magnetic Resonance Imaging (MRI) tumor data presents a major obstacle to accurate and automated tumor segmentation. While existing data synthesis methods offer promising solutions, they often suffer from key limitations: manual modeling is labor intensive and requires expert knowledge. Deep generative models may be used to augment data and annotation, but they typically demand large amounts of training pairs in the first place, which is impractical in data limited clinical settings. In this work, we propose Tumor Fabrication (TF), a novel two-stage framework for unpaired 3D brain tumor synthesis. The framework comprises a coarse tumor synthesis process followed by a refinement process powered by a generative model. TF is fully automated and leverages only healthy image scans along with a limited amount of real annotated data to synthesize large volumes of paired synthetic data for enriching downstream supervised segmentation training. We demonstrate that our synthetic image-label pairs used as data enrichment can significantly improve performance on downstream tumor segmentation tasks in low-data regimes, offering a scalable and reliable solution for medical image enrichment and addressing critical challenges in data scarcity for clinical AI applications.
Problem

Research questions and friction points this paper is trying to address.

Automated tumor segmentation faces annotated MRI data scarcity
Existing synthesis methods require intensive labor or abundant training pairs
Clinical settings need scalable solutions for limited data scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage framework for unpaired tumor synthesis
Generative model refines coarse tumor generation
Automated synthesis from healthy scans and limited data
๐Ÿ”Ž Similar Papers
No similar papers found.
N
Nayu Dong
Australian Institute for Machine Learning, The University of Adelaide, Australia
T
Townim Chowdhury
Australian Institute for Machine Learning, The University of Adelaide, Australia
H
Hieu Phan
Australian Institute for Machine Learning, The University of Adelaide, Australia
Mark Jenkinson
Mark Jenkinson
Professor of Neuroimaging
medical image analysisneuroimagingdeep learning
J
Johan Verjans
Australian Institute for Machine Learning, The University of Adelaide, Australia
Zhibin Liao
Zhibin Liao
School of Computer and Mathematical Sciences, University of Adelaide
Deep LearningMachine LearningMedical Image Analysis