Privacy-Aware Continual Self-Supervised Learning on Multi-Window Chest Computed Tomography for Domain-Shift Robustness

📅 2025-10-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the domain shift induced by multi-window settings in chest CT imaging and the scarcity of annotated data, this paper proposes the first privacy-preserving continual self-supervised learning framework. To avoid the privacy risks of reusing past data, the framework introduces a latent-space replay mechanism that alleviates catastrophic forgetting, and combines a Wasserstein-distance-driven knowledge distillation strategy with batch-wise knowledge integration for cross-window feature alignment and robust representation learning. The method operates exclusively on streams of unlabeled images during continual pretraining, enabling knowledge accumulation and transfer in dynamic clinical environments. Experiments on dual-window (lung/mediastinal) chest CT data show that the approach significantly outperforms existing continual learning and self-supervised baselines, maintaining strong generalization under domain shift while strictly preserving data privacy.
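The latent-space replay idea described above, storing compact feature vectors from earlier training stages instead of raw CT images, can be sketched as a simple reservoir-sampled buffer. This is not the paper's implementation; `LatentReplayBuffer`, its `capacity`, and the reservoir-sampling policy are illustrative assumptions.

```python
import numpy as np

class LatentReplayBuffer:
    """Illustrative sketch: keeps latent activations (not raw images) from
    past stages, so earlier data never has to be revisited in pixel space."""

    def __init__(self, capacity, dim, seed=0):
        self.capacity = capacity
        self.feats = np.empty((0, dim), dtype=np.float32)
        self.rng = np.random.default_rng(seed)
        self.seen = 0  # total latents observed so far

    def add(self, batch_feats):
        # Reservoir sampling keeps a uniform sample over everything seen.
        for f in batch_feats:
            if len(self.feats) < self.capacity:
                self.feats = np.vstack([self.feats, f[None]])
            else:
                j = self.rng.integers(0, self.seen + 1)
                if j < self.capacity:
                    self.feats[j] = f
            self.seen += 1

    def sample(self, n):
        # Draw replayed latents to mix into the current pretraining batch.
        idx = self.rng.integers(0, len(self.feats), size=min(n, len(self.feats)))
        return self.feats[idx]
```

During a new window-setting stage, sampled latents would be concatenated with the current batch's latents before the self-supervised loss, so forgetting is mitigated without storing any patient images.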

📝 Abstract
We propose a novel continual self-supervised learning (CSSL) framework for simultaneously learning diverse features from multi-window-obtained chest computed tomography (CT) images and ensuring data privacy. Achieving a robust and highly generalizable model in medical image diagnosis is challenging, mainly because of issues such as the scarcity of large-scale, accurately annotated datasets and domain shifts inherent to dynamic healthcare environments. Specifically, in chest CT, these domain shifts often arise from differences in window settings, which are optimized for distinct clinical purposes. Previous CSSL frameworks often mitigated domain shift by reusing past data, a typically impractical approach owing to privacy constraints. Our approach addresses these challenges by effectively capturing the relationship between previously learned knowledge and new information across different training stages through continual pretraining on unlabeled images. Specifically, by incorporating a latent replay-based mechanism into CSSL, our method mitigates catastrophic forgetting due to domain shifts during continual pretraining while ensuring data privacy. Additionally, we introduce a feature distillation technique that integrates Wasserstein distance-based knowledge distillation (WKD) and batch-knowledge ensemble (BKE), enhancing the ability of the model to learn meaningful, domain-shift-robust representations. Finally, we validate our approach using chest CT images obtained across two different window settings, demonstrating superior performance compared with other approaches.
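One common way to realize a Wasserstein-distance-based distillation loss like the WKD term mentioned in the abstract is the per-channel empirical 1-Wasserstein distance between teacher and student batch feature distributions; for equal-size samples this reduces to the mean absolute difference of sorted values. The paper's exact formulation may differ; this is a minimal sketch under that assumption.

```python
import numpy as np

def wasserstein1d_distill_loss(student_feats, teacher_feats):
    """Per-channel empirical 1-Wasserstein distance between the batch
    distributions of student and teacher features (shape: batch x channels).
    For equal-size empirical samples, W1 equals the mean absolute
    difference of the sorted values in each channel."""
    s = np.sort(student_feats, axis=0)  # sort each channel over the batch
    t = np.sort(teacher_feats, axis=0)
    return float(np.mean(np.abs(s - t)))
```

Because the loss compares sorted marginals rather than sample-to-sample pairs, it matches feature *distributions* across window settings, which is what makes it attractive for cross-window alignment.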
Problem

Research questions and friction points this paper is trying to address.

Addressing domain shifts in chest CT images from different window settings
Ensuring data privacy in continual learning by avoiding past data reuse
Mitigating catastrophic forgetting during continual pretraining on unlabeled images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent replay mechanism ensures privacy in continual learning
Wasserstein distance distillation enhances domain shift robustness
Batch-knowledge ensemble integrates multi-stage feature representations
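The batch-knowledge ensemble idea in the last bullet can be sketched as propagating each sample's soft prediction to similar samples in the same batch via a feature-affinity matrix. The affinity construction and the mixing weight `omega` below are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def batch_knowledge_ensemble(feats, probs, omega=0.5):
    """Sketch of batch-wise knowledge integration: refine per-sample soft
    predictions by mixing in the predictions of feature-similar batchmates.
    feats: (n, d) features; probs: (n, c) soft predictions; omega: assumed
    propagation weight in [0, 1)."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    A = f @ f.T                        # cosine similarities within the batch
    np.fill_diagonal(A, 0.0)           # no self-reinforcement
    A = np.exp(A) / np.exp(A).sum(axis=1, keepdims=True)  # row-stochastic
    n = len(probs)
    # Closed form of infinite propagation: P* = (1 - omega) (I - omega A)^-1 P
    refined = (1 - omega) * np.linalg.solve(np.eye(n) - omega * A, probs)
    return refined / refined.sum(axis=1, keepdims=True)
```

The closed-form solve sums all propagation steps at once (the Neumann series of `omega * A`), so the refined targets blend each sample's own prediction with knowledge from its nearest neighbors in feature space.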