🤖 AI Summary
To address severe domain shift and scarce annotated data in cross-domain medical image segmentation, this paper proposes SegCLR, a semi-supervised framework that pioneers the integration of a contrastive loss into the training of 3D medical image segmentation models. SegCLR supports both unsupervised domain adaptation and domain generalization (zero-shot domain adaptation), the latter requiring no target-domain data at all. It models multi-source labeled and unlabeled data (e.g., heterogeneous retinal OCT volumes) within a single framework and is robust to distribution shift. Evaluated on three clinical OCT datasets, it matches the performance of fully supervised models trained on the target domain and supports joint multi-domain training, substantially improving both in-domain and out-of-domain generalization. Its core contribution is the end-to-end co-optimization of the contrastive and segmentation objectives, establishing a scalable cross-domain segmentation paradigm with minimal dependence on target-domain data.
📝 Abstract
Despite their effectiveness, current deep learning models struggle with images from different domains with varying appearance and content. We introduce SegCLR, a versatile framework designed to segment images across different domains, employing supervised and contrastive learning simultaneously to learn effectively from both labeled and unlabeled data. We demonstrate the superior performance of SegCLR through a comprehensive evaluation on three diverse clinical datasets of 3D retinal Optical Coherence Tomography (OCT) images, for the slice-wise segmentation of fluids, across various network configurations and verified over 10 different network initializations. In an unsupervised domain adaptation setting, SegCLR achieves results on par with a supervised upper-bound model trained on the intended target domain. Notably, we find that SegCLR's segmentation performance is only marginally affected by the amount of unlabeled target-domain data, so we also propose an effective domain generalization extension of SegCLR, also known as zero-shot domain adaptation, which eliminates the need for any target-domain information. This shows that adding a contrastive loss to standard supervised segmentation training yields superior models that are inherently more generalizable to both in- and out-of-domain test data. We additionally propose a pragmatic solution for deploying SegCLR in realistic scenarios with multiple domains containing labeled data. Our framework thus pushes the boundaries of deep-learning-based segmentation in multi-domain applications, regardless of data availability: labeled, unlabeled, or nonexistent.
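The central idea above, jointly optimizing a supervised segmentation loss and a contrastive loss over augmented views, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the NT-Xent-style contrastive term, the function names, and the weighting factor `lam` are illustrative assumptions standing in for SegCLR's actual architecture and loss details.

```python
import numpy as np

def cross_entropy_seg(logits, labels):
    """Pixel-wise cross-entropy (logits: H x W x C, labels: H x W integer class ids)."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    h, w = labels.shape
    # Pick the predicted probability of the true class at every pixel.
    p_true = probs[np.arange(h)[:, None], np.arange(w), labels]
    return -np.mean(np.log(p_true + 1e-12))

def ntxent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss between two batches of embeddings (N x D),
    where z1[i] and z2[i] are embeddings of two views of the same sample."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)          # 2N x D
    sim = z @ z.T / temperature                   # cosine similarities, scaled
    np.fill_diagonal(sim, -np.inf)                # exclude self-similarity
    n = len(z1)
    # Row i's positive is the other view of the same sample.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return np.mean(logsumexp - sim[np.arange(2 * n), pos])

def joint_loss(logits, labels, z1, z2, lam=0.1):
    """Supervised segmentation loss plus a weighted contrastive term
    (lam is an illustrative hyperparameter, not a value from the paper)."""
    return cross_entropy_seg(logits, labels) + lam * ntxent(z1, z2)
```

Because the contrastive term needs no labels, it can be computed on unlabeled (or multi-source) batches while the cross-entropy term is computed only on labeled ones, which is what lets a framework of this kind exploit both kinds of data in a single training loop.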