Two Stage Segmentation of Cervical Tumors using PocketNet

📅 2024-09-17
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
In cervical cancer radiotherapy planning, manual delineation of structures on T2-weighted MRI is time-consuming and suffers from poor inter-observer consistency. To address this, we propose a two-stage automatic segmentation framework based on the lightweight PocketNet, enabling the first simultaneous, accurate segmentation of the cervix, vagina, uterus, and tumor. The method employs a coarse-to-fine strategy: anatomical region localization followed by fine-grained structure segmentation, optimized using Dice loss and five-fold cross-validation on multi-center T2w MRI data. Compared to existing approaches, our framework achieves superior generalizability across diverse scanning protocols and higher computational efficiency for clinical deployment. Quantitative evaluation yields a mean Dice score of 70.3% for tumor segmentation and >80.5% for organ segmentation, demonstrating robustness and reproducibility. These results indicate strong potential for clinical translation in routine radiotherapy planning workflows.
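The coarse-to-fine strategy described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `coarse_model` and `fine_model` stand in for the two PocketNet stages, and the margin value is an assumption.

```python
import numpy as np

def bounding_box(mask, margin=8):
    """Axis-aligned bounding box of a binary mask, padded by `margin` voxels."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, mask.shape)
    return tuple(slice(l, h) for l, h in zip(lo, hi))

def two_stage_segment(volume, coarse_model, fine_model, margin=8):
    """Stage 1 localizes the anatomical region; stage 2 segments structures in the crop."""
    coarse_mask = coarse_model(volume)       # binary localization mask
    roi = bounding_box(coarse_mask, margin)  # crop to the region of interest
    fine_labels = fine_model(volume[roi])    # multi-class segmentation of the crop
    out = np.zeros(volume.shape, dtype=fine_labels.dtype)
    out[roi] = fine_labels                   # paste the fine result back into the full volume
    return out
```

Restricting the second stage to a cropped region of interest is what keeps the fine model's input small, which is consistent with the computational-efficiency claim above.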

📝 Abstract
Cervical cancer remains the fourth most common malignancy among women worldwide [1]. Concurrent chemoradiotherapy (CRT) is the mainstay definitive treatment for locally advanced cervical cancer and includes external beam radiation followed by brachytherapy [2]. Integral to radiotherapy treatment planning is the routine contouring of the target tumor at the level of the cervix, the associated gynecologic anatomy, and the adjacent organs at risk (OARs). However, manual contouring of these structures is time- and labor-intensive and subject to known interobserver variability that can impact treatment outcomes. While multiple tools have been developed to automatically segment OARs and the high-risk clinical target volume (HR-CTV) on computed tomography (CT) images [3-6], the development of deep learning-based tumor segmentation tools using routine T2-weighted (T2w) magnetic resonance imaging (MRI) addresses an unmet clinical need to improve the routine contouring of both anatomical structures and cervical cancers, thereby increasing the quality and consistency of radiotherapy planning. This work applied a novel deep-learning model (PocketNet) to segment the cervix, vagina, uterus, and tumor(s) on T2w MRI. The performance of the PocketNet architecture was evaluated when trained via 5-fold cross-validation. PocketNet achieved a mean Dice-Sorensen similarity coefficient (DSC) exceeding 70% for tumor segmentation and 80% for organ segmentation. These results suggest that PocketNet is robust to variations in contrast protocols, providing reliable segmentation of the regions of interest.
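The Dice-Sorensen similarity coefficient (DSC) reported in the abstract measures overlap between a predicted mask and the ground-truth contour. A minimal sketch of the evaluation metric (the `eps` smoothing term is an assumption, added here only to avoid division by zero on empty masks):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice-Sorensen similarity coefficient between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 means none.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```

The same overlap term, applied to soft network outputs, is also the basis of the Dice loss mentioned in the summary above.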
Problem

Research questions and friction points this paper is trying to address.

Deep Learning
Cervical Cancer
MRI
Innovation

Methods, ideas, or system contributions that make the work stand out.

PocketNet
T2-weighted MRI
Deep Learning
Awj Twam
Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
Megan C. Jacobsen
Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
Rachel Glenn
Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
Ann Klopp
Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
Aradhana M. Venkatesan
Department of Abdominal Imaging, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
David Fuentes
Professor
Image Processing, Optimization, Finite Element Modeling, Uncertainty Quantification