🤖 AI Summary
To address the clinical challenges in pancreatic cancer CT segmentation—namely, large inter-patient variability in lesion morphology, low contrast between lesions and surrounding normal tissue, and poor cross-center generalizability—we propose a classification-regression collaborative generalization paradigm. This framework jointly performs pixel-wise semantic segmentation and spatial regression of lesion boundaries, establishing a mutual feedback supervision mechanism between the two tasks. We further introduce a novel dual self-supervised augmentation strategy operating simultaneously in feature space and output space to enhance model robustness against imaging protocol variations and lesion heterogeneity. Evaluated on heterogeneous CT data from three independent centers (594 cases), our method achieves an in-domain Dice score of 84.07% and improves cross-lesion generalization performance by 9.51%. The source code is publicly available.
📝 Abstract
Pancreatic cancer, characterized by its notable prevalence and mortality rates, demands accurate lesion delineation for effective diagnosis and therapeutic intervention. The generalizability of existing methods is frequently compromised by pronounced imaging variability and the heterogeneous characteristics of pancreatic lesions, which may mimic normal tissue and exhibit significant inter-patient variability. We therefore propose a generalization framework that synergizes pixel-level classification and regression tasks to accurately delineate lesions and improve model stability. This framework not only seeks to align segmentation contours with actual lesions but also uses regression to elucidate spatial relationships between diseased and normal tissues, thereby improving tumor localization and morphological characterization. Through the reciprocal transformation of task outputs, our approach integrates additional regression supervision into the segmentation context, bolstering the model's generalization ability from a dual-task perspective. Moreover, dual self-supervised learning in feature space and output space augments the model's representational capability and stability across different imaging views. Experiments on 594 samples drawn from three datasets with significant imaging differences demonstrate that our generalized pancreas segmentation achieves results comparable to mainstream in-domain validation performance (Dice: 84.07%). More importantly, it improves results on the highly challenging cross-lesion generalized pancreatic cancer segmentation task by 9.51%. Our model thus constitutes a resilient and efficient technological foundation for pancreatic disease management and wider medical applications. The code will be released at https://github.com/SJTUBME-QianLab/Dual-Task-Seg.
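The reciprocal transformation between the classification and regression tasks can be illustrated concretely. A minimal NumPy sketch, assuming a signed-distance-map regression target (a common choice for boundary regression; the paper's actual formulation may differ) and a Dice term for segmentation: the ground-truth mask is converted into a signed distance map for the regression branch, and the regression output, thresholded at zero, is converted back into a mask for cross-task supervision. All function names here are illustrative, not from the released code.

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed Euclidean distance to the lesion boundary:
    negative inside the lesion, positive outside (illustrative convention)."""
    pts_in = np.argwhere(mask == 1)
    pts_out = np.argwhere(mask == 0)
    sdm = np.zeros(mask.shape, dtype=float)
    for (i, j), v in np.ndenumerate(mask):
        p = np.array([i, j])
        if v == 1:  # distance to nearest background pixel, negated
            sdm[i, j] = -np.sqrt(((pts_out - p) ** 2).sum(axis=1)).min()
        else:       # distance to nearest lesion pixel
            sdm[i, j] = np.sqrt(((pts_in - p) ** 2).sum(axis=1)).min()
    return sdm

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def dual_task_loss(seg_pred, dist_pred, mask, lam=0.1):
    """Joint loss: Dice on the segmentation branch, L1 on the distance
    branch, plus a cross-task term that thresholds the regression output
    back into a mask and supervises it with the segmentation target."""
    sdm = signed_distance(mask)
    l_seg = dice_loss(seg_pred, mask.astype(float))
    l_reg = np.abs(dist_pred - sdm).mean()
    l_cross = dice_loss((dist_pred < 0).astype(float), mask.astype(float))
    return l_seg + lam * (l_reg + l_cross)

# Tiny toy example: a 2x2 "lesion" on a 6x6 grid.
mask = np.zeros((6, 6), dtype=int)
mask[2:4, 2:4] = 1
sdm = signed_distance(mask)
loss = dual_task_loss(mask.astype(float), sdm, mask)  # perfect predictions
```

With perfect predictions on both branches the combined loss is zero, and any disagreement between the thresholded distance map and the segmentation output is penalized, which is the essence of the mutual feedback supervision described above.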