AI Summary
Class-incremental medical image segmentation (CIMIS) suffers from severe catastrophic forgetting and lacks supervision from old-class labels. Existing methods employ uniform spatial and channel-wise distillation, ignoring variations in regional and channel importance; moreover, they align only the global prototypes of old classes, neglecting the local representations of old classes within new data, which leads to structural degradation. This paper proposes a prototype-guided calibration and dual-alignment distillation framework that (i) dynamically calibrates distillation strength per spatial region and channel based on prototype-feature similarity, and (ii) jointly performs global and local prototype alignment to preserve the structural integrity of old-class features. Evaluated on multi-organ incremental segmentation benchmarks, the method significantly outperforms state-of-the-art approaches, achieving more balanced performance across old and new classes with enhanced generalization and robustness.
Abstract
Class-incremental medical image segmentation (CIMIS) aims to preserve knowledge of previously learned classes while learning new ones, without relying on old-class labels. However, existing methods either 1) adopt one-size-fits-all strategies that treat all spatial regions and feature channels equally, which may hinder the preservation of accurate old knowledge, or 2) align only the local prototypes of old classes with their global ones while overlooking the local representations of old classes in new data, leading to knowledge degradation. To mitigate these issues, we propose Prototype-Guided Calibration Distillation (PGCD) and Dual-Aligned Prototype Distillation (DAPD) for CIMIS in this paper. Specifically, PGCD exploits prototype-to-feature similarity to calibrate class-specific distillation intensity across spatial regions, effectively reinforcing reliable old knowledge while suppressing misleading information from old classes. Complementarily, DAPD aligns the local prototypes of old classes extracted from the current model with both their global prototypes and their local prototypes in new data, further enhancing segmentation performance on old categories. Comprehensive evaluations on two widely used multi-organ segmentation benchmarks demonstrate that our method outperforms state-of-the-art approaches, highlighting its robustness and generalization capability.
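To make the prototype-guided calibration idea concrete, the following is a minimal sketch of the spatial part of such a scheme, not the paper's actual implementation: per-pixel distillation weights are derived from cosine similarity between old-class prototypes and the frozen old model's features, then used to modulate a feature-distillation loss. The function name, the use of MSE distillation, and the max-over-prototypes weighting are all illustrative assumptions; the paper's channel-wise calibration is omitted here.

```python
import torch
import torch.nn.functional as F

def prototype_guided_distillation(feat_new, feat_old, prototypes):
    """Hypothetical sketch of prototype-guided calibration distillation.

    feat_new:   (B, C, H, W) features from the current (student) model
    feat_old:   (B, C, H, W) features from the frozen old (teacher) model
    prototypes: (K, C) global prototypes for K old classes
    """
    B, C, H, W = feat_old.shape
    # Cosine similarity between each old-class prototype and each spatial feature
    f = F.normalize(feat_old.flatten(2), dim=1)   # (B, C, H*W)
    p = F.normalize(prototypes, dim=1)            # (K, C)
    sim = torch.einsum('kc,bcn->bkn', p, f)       # (B, K, H*W)
    # Per-pixel calibration weight: high where the old features resemble some
    # old-class prototype (reliable old knowledge), low elsewhere (assumption)
    w = sim.max(dim=1).values.clamp(min=0).view(B, 1, H, W)
    # Spatially calibrated feature-distillation loss
    return (w * (feat_new - feat_old).pow(2)).mean()
```

Regions whose old-model features lie far from every prototype receive near-zero weight, so potentially misleading old responses contribute little to the distillation signal.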