A Study of Anatomical Priors for Deep Learning-Based Segmentation of Pheochromocytoma in Abdominal CT

📅 2025-07-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the low accuracy of automatic pheochromocytoma (PCC) segmentation in abdominal contrast-enhanced CT, which hinders tumor burden quantification and clinical decision-making. To overcome this, we propose an anatomy-informed multi-class labeling strategy. Specifically, we introduce a novel 3D labeling scheme—Tumor + Kidney + Aorta (TKA)—that explicitly models spatial relationships between PCCs and critical adjacent organs, outperforming conventional Tumor + Body labeling. Built upon the nnU-Net framework, our method is trained and validated on 105 high-quality contrast-enhanced CT volumes. Evaluation employs Dice Similarity Coefficient (DSC), Normalized Surface Distance (NSD), and instance-level F1 score. The TKA strategy achieves statistically significant improvements across all metrics, attaining an R² of 0.968 for tumor volume prediction, robust five-fold cross-validation performance, and strong generalizability across SDHB/SDHD genetic subtypes. This provides a reliable imaging biomarker to support precision risk stratification, potentially reducing reliance on invasive genetic testing.

📝 Abstract
Accurate segmentation of pheochromocytoma (PCC) in abdominal CT scans is essential for tumor burden estimation, prognosis, and treatment planning. It may also help infer genetic clusters, reducing reliance on expensive testing. This study systematically evaluates anatomical priors to identify configurations that improve deep learning-based PCC segmentation. We employed the nnU-Net framework to evaluate eleven annotation strategies for accurate 3D segmentation of pheochromocytoma, introducing a set of novel multi-class schemes based on organ-specific anatomical priors. These priors were derived from adjacent organs commonly surrounding adrenal tumors (e.g., liver, spleen, kidney, aorta, adrenal gland, and pancreas), and were compared against a broad body-region prior used in previous work. The framework was trained and tested on 105 contrast-enhanced CT scans from 91 patients at the NIH Clinical Center. Performance was measured using Dice Similarity Coefficient (DSC), Normalized Surface Distance (NSD), and instance-wise F1 score. Among all strategies, the Tumor + Kidney + Aorta (TKA) annotation achieved the highest segmentation accuracy, significantly outperforming the previously used Tumor + Body (TB) annotation across DSC (p = 0.0097), NSD (p = 0.0110), and F1 score (25.84% improvement at an IoU threshold of 0.5), measured on a 70-30 train-test split. The TKA model also showed superior tumor burden quantification (R² = 0.968) and strong segmentation across all genetic subtypes. In five-fold cross-validation, TKA consistently outperformed TB across IoU thresholds (0.1 to 0.5), reinforcing its robustness and generalizability. These findings highlight the value of incorporating relevant anatomical context in deep learning models to achieve precise PCC segmentation, supporting clinical assessment and longitudinal monitoring.
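The volumetric and instance-level metrics named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's evaluation code: the function names and the greedy instance matching are assumptions, and NSD (which requires surface-distance computation) is omitted.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice Similarity Coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def iou(pred, gt):
    """Intersection over Union between two binary masks."""
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, gt).sum() / union

def instance_f1(pred_instances, gt_instances, iou_thresh=0.5):
    """Instance-level F1: a predicted instance is a true positive if it
    overlaps an as-yet-unmatched ground-truth instance with IoU >= threshold
    (greedy matching; an illustrative simplification)."""
    matched, tp = set(), 0
    for p in pred_instances:
        for i, g in enumerate(gt_instances):
            if i not in matched and iou(p, g) >= iou_thresh:
                matched.add(i)
                tp += 1
                break
    fp = len(pred_instances) - tp
    fn = len(gt_instances) - tp
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0
```

An IoU threshold of 0.5, as used in the reported 25.84% F1 improvement, requires a predicted tumor instance to overlap a ground-truth instance by at least half of their union to count as detected.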
Problem

Research questions and friction points this paper is trying to address.

Improving deep learning-based segmentation of pheochromocytoma in CT scans
Evaluating anatomical priors for accurate 3D tumor segmentation
Enhancing tumor burden estimation and genetic subtype analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses nnU-Net for 3D PCC segmentation
Introduces multi-class organ-specific anatomical priors
TKA annotation achieves highest segmentation accuracy
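The tumor burden quantification behind the reported R² of 0.968 reduces to counting foreground voxels in the predicted mask and scaling by the voxel spacing, then comparing predicted against reference volumes. A minimal sketch follows; the helper names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def tumor_volume_ml(mask, spacing_mm):
    """Tumor volume in millilitres from a binary mask and per-axis
    voxel spacing in millimetres (1 ml = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

def r_squared(y_true, y_pred):
    """Coefficient of determination between reference and predicted volumes."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot
```

An R² close to 1 indicates that predicted volumes track the reference volumes almost exactly, which is what makes the segmentation usable for longitudinal tumor burden monitoring.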
Tanjin Taher Toma
University of Virginia
Image Processing, Deep Learning, Segmentation, Tracking, Reconstruction
Tejas Sudharshan Mathai
Associate Scientist, NIH
Medical Imaging, Image-Guided Surgery, Deep Learning, Computer Vision, Machine Learning
Bikash Santra
IIT Jodhpur
Computer Vision, Machine Learning, Medical Image Analysis
Pritam Mukherjee
National Institutes of Health Clinical Center
machine learning for healthcare, medical imaging
Jianfei Liu
National Institutes of Health
Medical Image Analysis, Computer Vision
Wesley Jong
Department of Radiology, The George Washington University School of Medicine and Health Sciences, Washington, DC, USA.
Darwish Alabyad
Department of Radiology, The George Washington University School of Medicine and Health Sciences, Washington, DC, USA.
Vivek Batheja
Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Clinical Center, NIH, Bethesda, MD, USA.
Abhishek Jha
Eunice Kennedy Shriver National Institute of Child Health and Human Development, NIH, Bethesda, MD, USA.
Mayank Patel
PhD Student, Purdue University
Deep Learning, Computational Design, Human-Computer Interaction
Darko Pucar
Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Clinical Center, NIH, Bethesda, MD, USA.
Jayadira del Rivero
National Cancer Institute, NIH, Bethesda, MD, USA.
Karel Pacak
Ronald M. Summers
Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Clinical Center, NIH, Bethesda, MD, USA.