🤖 AI Summary
To address catastrophic forgetting and data interference in continual self-supervised learning (CSSL) of chest CT images, this paper proposes the first CSSL framework integrating domain-specific medical prior knowledge. Methodologically: (1) anatomical structures and pathological semantics are explicitly embedded into the contrastive learning objective for the first time; (2) Dynamic Experience Replay (DER) is enhanced via joint diversity-aware sampling and representative gradient-based selection; (3) MixUp augmentation is coupled with cross-task feature distillation to improve representation robustness. Evaluated under a dual-modality (non-contrast/enhanced) CT continual learning setting, our approach outperforms state-of-the-art methods, achieving average improvements of +3.2% mAP on downstream classification and +2.8% Dice score on segmentation tasks during online pretraining. The learned representations exhibit superior generalizability and clinical interpretability, enabling more reliable and semantically grounded medical image understanding.
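The buffer-maintenance idea in point (2) — keeping the rehearsal buffer both diverse and representative — can be sketched as a greedy selection over feature embeddings. This is a minimal illustration under our own assumptions (a farthest-point diversity term plus a distance-to-centroid representativeness term, with a hypothetical trade-off weight `alpha`); the paper's exact gradient-based scoring is not specified in the summary.

```python
import numpy as np

def select_buffer(features, k, alpha=0.5):
    """Greedily pick k buffer samples balancing diversity
    (farthest-point distance to already-chosen samples) and
    representativeness (closeness to the feature centroid).

    features : (n, d) array of sample embeddings
    alpha    : assumed trade-off; 1.0 = pure diversity,
               0.0 = pure representativeness
    """
    centroid = features.mean(axis=0)
    # Higher score = closer to the centroid, i.e. more "typical".
    rep = -np.linalg.norm(features - centroid, axis=1)
    chosen = [int(np.argmax(rep))]  # seed with the most typical sample
    while len(chosen) < k:
        # Distance of every candidate to its nearest already-chosen sample.
        d = np.min(
            np.linalg.norm(
                features[:, None, :] - features[chosen][None, :, :], axis=2
            ),
            axis=1,
        )
        score = alpha * d + (1 - alpha) * rep
        score[chosen] = -np.inf  # never re-pick a chosen sample
        chosen.append(int(np.argmax(score)))
    return chosen
```

In practice the two terms would be computed on backbone embeddings (and, per the summary, combined with gradient information); the normalization of the two scores is another design choice this sketch glosses over.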
📝 Abstract
We propose a novel continual self-supervised learning (CSSL) method that incorporates medical domain knowledge for chest CT images. Our approach addresses the challenge of sequential learning by effectively capturing the relationship between previously learned knowledge and new information at different stages. By incorporating an enhanced Dynamic Experience Replay (DER) into CSSL and maintaining both diversity and representativeness within the DER rehearsal buffer, we reduce the risk of data interference during pretraining, enabling the model to learn richer and more robust feature representations. In addition, we incorporate a MixUp strategy and feature distillation to further enhance the model's ability to learn meaningful representations. We validate our method using chest CT images obtained under two different imaging conditions, demonstrating superior performance compared to state-of-the-art methods.
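The MixUp-plus-feature-distillation combination mentioned above can be sketched as follows. This is a toy illustration under assumed details: `encoder` is a stand-in linear backbone (the real model is a deep CT encoder), and the distillation loss is assumed to be an MSE between current-stage features and frozen previous-stage features on mixed inputs; the paper's exact loss is not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, w):
    """Toy linear encoder standing in for the CT backbone."""
    return x @ w

def mixup(xa, xb, alpha=0.4):
    """MixUp: convex combination of two input batches with a
    Beta(alpha, alpha)-sampled mixing coefficient."""
    lam = rng.beta(alpha, alpha)
    return lam * xa + (1 - lam) * xb, lam

def distillation_loss(w_student, w_teacher, x_mixed):
    """Feature distillation: keep the current encoder's features on
    mixed inputs close to those of the frozen previous-stage encoder."""
    fs = encoder(x_mixed, w_student)
    ft = encoder(x_mixed, w_teacher)  # teacher weights are frozen
    return float(np.mean((fs - ft) ** 2))
```

A training step would add this distillation term to the contrastive objective, so that representations learned on earlier imaging conditions are preserved while new ones are acquired.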