🤖 AI Summary
In cervical cancer radiotherapy planning, manual delineation of structures on T2-weighted MRI is time-consuming and suffers from poor inter-observer consistency. To address this, we propose a two-stage automatic segmentation framework based on the lightweight PocketNet, enabling the first simultaneous, accurate segmentation of the cervix, vagina, uterus, and tumor. The method employs a coarse-to-fine strategy: anatomical region localization followed by fine-grained structure segmentation, optimized using Dice loss and five-fold cross-validation on multi-center T2w MRI data. Compared to existing approaches, our framework achieves superior generalizability across diverse scanning protocols and higher computational efficiency for clinical deployment. Quantitative evaluation yields a mean Dice score of 70.3% for tumor segmentation and >80.5% for organ segmentation, demonstrating robustness and reproducibility. These results indicate strong potential for clinical translation in routine radiotherapy planning workflows.
📝 Abstract
Cervical cancer remains the fourth most common malignancy amongst women worldwide.1 Concurrent chemoradiotherapy (CRT) serves as the mainstay definitive treatment regimen for locally advanced cervical cancers and includes external beam radiation followed by brachytherapy.2 Integral to radiotherapy treatment planning is the routine contouring of both the target tumor at the level of the cervix, associated gynecologic anatomy and the adjacent organs at risk (OARs). However, manual contouring of these structures is both time and labor intensive and associated with known interobserver variability that can impact treatment outcomes. While multiple tools have been developed to automatically segment OARs and the high-risk clinical tumor volume (HR-CTV) using computed tomography (CT) images,3,4,5,6 the development of deep learning-based tumor segmentation tools using routine T2-weighted (T2w) magnetic resonance imaging (MRI) addresses an unmet clinical need to improve the routine contouring of both anatomical structures and cervical cancers, thereby increasing quality and consistency of radiotherapy planning. This work applied a novel deep-learning model (PocketNet) to segment the cervix, vagina, uterus, and tumor(s) on T2w MRI. The performance of the PocketNet architecture was evaluated, when trained on data via 5-fold cross validation. PocketNet achieved a mean Dice-Sorensen similarity coefficient (DSC) exceeding 70% for tumor segmentation and 80% for organ segmentation. These results suggest that PocketNet is robust to variations in contrast protocols, providing reliable segmentation of the regions of interest.