SegRap2025: A Benchmark of Gross Tumor Volume and Lymph Node Clinical Target Volume Segmentation for Radiotherapy Planning of Nasopharyngeal Carcinoma

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of generalizing automatic segmentation of the gross tumor volume (GTV) and lymph node clinical target volume (LN CTV) in nasopharyngeal carcinoma radiotherapy across multi-center, multi-modal CT imaging. It establishes the first multi-center benchmark encompassing both non-contrast and contrast-enhanced CT scans and introduces a clinically realistic mixed-data setting involving single- and dual-modality inputs. Within a unified deep learning framework, the robustness of multiple participating models is systematically evaluated under cross-center and cross-modality conditions. Results show that GTV segmentation achieves Dice similarity coefficients (DSC) of up to 74.61% and 56.79% on internal and external test sets, respectively, while LN CTV segmentation yields DSCs of 60.24%, 60.50%, and 57.23% on paired CT, contrast-enhanced-only, and non-contrast-only subsets, respectively, providing the first comprehensive insight into the generalization limitations of current methods in complex clinical settings.

📝 Abstract
Accurate delineation of the Gross Tumor Volume (GTV), Lymph Node Clinical Target Volume (LN CTV), and Organs-at-Risk (OAR) from Computed Tomography (CT) scans is essential for precise radiotherapy planning in Nasopharyngeal Carcinoma (NPC). Building upon SegRap2023, which focused on OAR and GTV segmentation using single-center paired non-contrast CT (ncCT) and contrast-enhanced CT (ceCT) scans, the SegRap2025 challenge aims to enhance the generalizability and robustness of segmentation models across imaging centers and modalities. SegRap2025 comprises two tasks: Task01 addresses GTV segmentation using paired CT from the SegRap2023 dataset, with an additional external testing set to evaluate cross-center generalization, and Task02 focuses on LN CTV segmentation using multi-center training data and an unseen external testing set, where each case contains paired CT scans or a single modality, emphasizing both cross-center and cross-modality robustness. This paper presents the challenge setup and provides a comprehensive analysis of the solutions submitted by ten participating teams. For the GTV segmentation task, the top-performing models achieved average Dice Similarity Coefficients (DSC) of 74.61% and 56.79% on the internal and external testing cohorts, respectively. For the LN CTV segmentation task, the highest average DSC values reached 60.24%, 60.50%, and 57.23% on the paired CT, ceCT-only, and ncCT-only subsets, respectively. SegRap2025 establishes a large-scale multi-center, multi-modality benchmark for evaluating generalization and robustness in radiotherapy target segmentation, providing valuable insights toward clinically applicable automated radiotherapy planning systems. The benchmark is available at: https://hilab-git.github.io/SegRap2025_Challenge.
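All of the percentages quoted above are Dice Similarity Coefficients: twice the voxel overlap between a predicted mask and the ground-truth mask, divided by the sum of the two mask volumes. A minimal sketch of this metric on binary NumPy volumes (an illustrative helper, not the challenge's official evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice Similarity Coefficient between two binary masks, in percent."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    # DSC = 2|P ∩ G| / (|P| + |G|); eps guards against two empty masks
    return 100.0 * 2.0 * intersection / (pred.sum() + gt.sum() + eps)

# Toy 3D "volumes": two partially overlapping 6x6x6 cubes
pred = np.zeros((10, 10, 10))
pred[2:8, 2:8, 2:8] = 1
gt = np.zeros((10, 10, 10))
gt[3:9, 3:9, 3:9] = 1
print(f"{dice_coefficient(pred, gt):.2f}")  # partial overlap -> DSC well below 100
```

In the toy case each cube has 216 voxels and they share a 5x5x5 overlap of 125 voxels, so DSC = 2·125/432 ≈ 57.87% — a useful reminder that a mid-50s DSC, like those reported on the external cohorts, still reflects substantial disagreement with the reference contour.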
Problem

Research questions and friction points this paper is trying to address.

Gross Tumor Volume
Lymph Node Clinical Target Volume
radiotherapy planning
multi-center
multi-modality
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-center
multi-modality
cross-modality robustness
radiotherapy target segmentation
generalization benchmark
Jia Fu
RISE Research Institutes of Sweden, KTH Royal Institute of Technology
Robust Artificial Intelligence, Multimodal Machine Learning, Applied Computer Vision
Litingyu Wang
School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China.
He Li
School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China.
Zihao Luo
University of Electronic Science and Technology of China | Shanghai Innovation Institute
Medical Image Analysis, Foundation Model, AI for Science
Huamin Wang
School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China.
Chenyuan Bian
The Affiliated Hospital of Qingdao University, Qingdao, China.
Zijun Gao
The Chinese University of Hong Kong
AI for Science, LLM
Chunbin Gu
Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.
Xin Weng
Bank of China, Cangshan Sub-branch, Fujian Branch, Fuzhou, China.
Jianghao Wu
Monash University
Medical Image Analysis, Computer Vision, Natural Language Processing
Yicheng Wu
Imperial College London
Computer Vision, Medical Image Analysis, Generative AI
Jin Ye
Monash University
Medical AI, Computer Vision
Linhao Li
School of Computer Science and Engineering, Northwestern Polytechnical University, Xi’an, China.
Yiwen Ye
School of Computer Science and Engineering, Northwestern Polytechnical University, Xi’an, China.
Yong Xia
Northwestern Polytechnical University
image processing, medical image analysis, computer-aided diagnosis, pattern recognition, machine learning
Elias Tappeiner
Researcher, UMIT - Private University for Health Sciences, Medical Informatics and Technology
machine learning, medical image segmentation
Fei He
Department of Industrial Engineering and Operations Research, Columbia University
Applied probability, Robust Optimization, Simulation, Machine Learning
Abdul Qayyum
Imperial College London, UK
Machine and Deep Learning, Biomedical Signals and Imaging, Cardiac Digital Twin, quantum ML
Moona Mazher
University College London, UK
Medical Image Analysis, Deep Learning, EEG signal processing, Machine Learning, Brain signal
Steven Niederer
Imperial College London
Cardiac Modelling
Junqiang Chen
Shanghai MediWorks Precision Instruments Co., Ltd., China.
Chuanyi Huang
School of Automation and Intelligent Sensing, Shanghai Jiao Tong University, Shanghai, China.
Lisheng Wang
School of Automation and Intelligent Sensing, Shanghai Jiao Tong University, Shanghai, China.
Zhaohu Xing
Hong Kong University of Science and Technology (Guangzhou)
Medical Image Analysis, Video Understanding, Image Generation
Hongqiu Wang
Hong Kong University of Science and Technology (Guangzhou)
AI for healthcare, Label-efficient learning, Multi-modal learning, Fairness, MLLM
Lei Zhu
Hong Kong University of Science and Technology
Computational photography, Vision, Image and video processing, Image restoration, healthcare
Shichuan Zhang
Department of Radiation Oncology, Sichuan Cancer Center, Radiation Oncology Key Laboratory of Sichuan Province, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital and Institute, University of Electronic Science and Technology of China, Chengdu, China.
Shaoting Zhang
Shanghai AI Lab; SenseTime Research
Medical Image Analysis, Computer Vision, Foundation Models
Wenjun Liao
Department of Radiation Oncology, Sichuan Cancer Center, Radiation Oncology Key Laboratory of Sichuan Province, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital and Institute, University of Electronic Science and Technology of China, Chengdu, China.
Guotai Wang
Professor, University of Electronic Science and Technology of China
medical image analysis, computer vision, deep learning