🤖 AI Summary
Poor generalizability across MRI and CT modalities—exacerbated by anatomical variability and modality-specific intensity distributions—hampers 3D segmentation of female pelvic floor structures. To address this, we propose a novel 3D Vision Transformer (ViT) architecture incorporating Rotary Position Embedding (RoPE), integrated with self-supervised contrastive pretraining and a multi-center, multi-modal pretrain-fine-tune paradigm to jointly enforce cross-modality feature alignment and model long-range spatial dependencies. To our knowledge, this is the first work to introduce RoPE into Transformer-based cross-modality medical image segmentation. Evaluated on multi-center MRI/CT test sets, our method achieves a mean Dice coefficient improvement of 4.2% over state-of-the-art approaches, demonstrating superior generalizability and clinical deployability.
📝 Abstract
Deep learning-based segmentation of genito-pelvic structures in MRI and CT is crucial for applications such as radiation therapy, surgical planning, and disease diagnosis. However, existing segmentation models often struggle with generalizability across imaging modalities, and anatomical variations. In this work, we propose RoMedFormer, a rotary-embedding transformer-based foundation model designed for 3D female genito-pelvic structure segmentation in both MRI and CT. RoMedFormer leverages self-supervised learning and rotary positional embeddings to enhance spatial feature representation and capture long-range dependencies in 3D medical data. We pre-train our model using a diverse dataset of 3D MRI and CT scans and fine-tune it for downstream segmentation tasks. Experimental results demonstrate that RoMedFormer achieves superior performance segmenting genito-pelvic organs. Our findings highlight the potential of transformer-based architectures in medical image segmentation and pave the way for more transferable segmentation frameworks.