๐ค AI Summary
Current skull-stripping methods exhibit poor generalizability across multi-modal, multi-species, and pathological scenarios, and heavily rely on scarce, labor-intensive ground-truth annotations. To address this, we propose PUMBAโa novel framework enabling fully synthetic-data-driven training of a universal brain tissue segmentation model for the first time. PUMBA leverages generative modeling to synthesize paired, anatomically consistent 3D brain MRIs spanning multiple modalities (T1/T2/FLAIR) and species (human/mouse/monkey). It integrates unsupervised structural constraints with adversarial consistency regularization into a 3D U-Net architecture, eliminating dependence on real medical images or hand-crafted anatomical priors. Evaluated across diverse modalities, species, and pathological cases, PUMBA achieves Dice scores exceeding 94%โmatching fully supervised baselines while demonstrating significantly superior cross-domain generalization. This work establishes a new paradigm for generalizable, annotation-free medical image segmentation.
๐ Abstract
While many skull stripping algorithms have been developed for multi-modal and multi-species cases, there is still a lack of a fundamentally generalizable approach. We present PUMBA(PUrely synthetic Multimodal/species invariant Brain extrAction), a strategy to train a model for brain extraction with no real brain images or labels. Our results show that even without any real images or anatomical priors, the model achieves comparable accuracy in multi-modal, multi-species and pathological cases. This work presents a new direction of research for any generalizable medical image segmentation task.