Joint semi-supervised and contrastive learning enables domain generalization and multi-domain segmentation

📅 2024-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address severe domain shift and scarce annotated data in cross-domain medical image segmentation, this paper proposes SegCLR, a semi-supervised framework that integrates a contrastive loss into 3D medical image segmentation training. It enables zero-shot domain adaptation and domain generalization without access to any target-domain data. SegCLR uniformly models multi-source labeled and unlabeled data (e.g., heterogeneous retinal OCT volumes), requires no target-domain unlabeled samples, and exhibits strong robustness to distributional shifts. Evaluated on three clinical OCT datasets, it matches the performance of fully supervised target-domain models and supports joint multi-domain training, substantially improving both in-domain and out-of-domain generalization. Its core innovation lies in end-to-end co-optimization of the contrastive and segmentation objectives, establishing a scalable, low-dependency paradigm for cross-domain segmentation in medical imaging.

📝 Abstract
Despite their effectiveness, current deep learning models face challenges with images coming from different domains with varying appearance and content. We introduce SegCLR, a versatile framework designed to segment images across different domains, employing supervised and contrastive learning simultaneously to effectively learn from both labeled and unlabeled data. We demonstrate the superior performance of SegCLR through a comprehensive evaluation involving three diverse clinical datasets of 3D retinal Optical Coherence Tomography (OCT) images, for the slice-wise segmentation of fluids with various network configurations and verification across 10 different network initializations. In an unsupervised domain adaptation context, SegCLR achieves results on par with a supervised upper-bound model trained on the intended target domain. Notably, we discover that the segmentation performance of the SegCLR framework is only marginally impacted by the abundance of unlabeled data from the target domain, and thus we also propose an effective domain generalization extension of SegCLR, also known as zero-shot domain adaptation, which eliminates the need for any target domain information. This shows that our proposed addition of a contrastive loss to standard supervised training for segmentation leads to superior models, inherently more generalizable to both in- and out-of-domain test data. We additionally propose a pragmatic solution for SegCLR deployment in realistic scenarios with multiple domains containing labeled data. Accordingly, our framework pushes the boundaries of deep-learning based segmentation in multi-domain applications, regardless of data availability - labeled, unlabeled, or nonexistent.
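The abstract describes co-optimizing a supervised segmentation loss with a contrastive loss, where labeled slices contribute to both terms and unlabeled slices contribute only to the contrastive term. A minimal NumPy sketch of such a joint objective is below; the function names, the Dice/NT-Xent choices, and the weighting `lam` are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss between a predicted probability map and a binary mask.
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def nt_xent_loss(z1, z2, temperature=0.5):
    # NT-Xent contrastive loss over two batches of embeddings (e.g., two
    # augmented views of the same slices): positives are (z1[i], z2[i]),
    # all other pairs in the batch act as negatives.
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    n = z1.shape[0]
    sim = z @ z.T / temperature                        # cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # index of each sample's positive partner in the concatenated batch
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logsumexp = np.log(np.sum(np.exp(sim), axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

def joint_loss(pred, target, z1, z2, lam=0.1):
    # Labeled data contributes the supervised term; labeled and unlabeled
    # data alike contribute the contrastive term, weighted by lam.
    return dice_loss(pred, target) + lam * nt_xent_loss(z1, z2)
```

In practice both terms would be backpropagated through a shared encoder end to end; the sketch only shows how the two losses combine into one scalar objective.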
Problem

Research questions and friction points this paper is trying to address.

How to generalize segmentation models to domains with differing appearance and content.
How to learn jointly from labeled and unlabeled data within one training scheme.
How to handle target domains for which no data, labeled or unlabeled, is available.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines supervised and contrastive learning
Enhances domain generalization capabilities
Supports multi-domain segmentation effectively
Alvaro Gomariz
F. Hoffmann-La Roche AG, Basel, Switzerland
Yusuke Kikuchi
Genentech Inc, California, United States
Yun Yvonna Li
F. Hoffmann-La Roche AG, Basel, Switzerland
Thomas Albrecht
F. Hoffmann-La Roche AG, Basel, Switzerland
Andreas Maunz
F. Hoffmann-La Roche AG, Basel, Switzerland
Daniela Ferrara
Genentech Inc, California, United States
Huanxiang Lu
F. Hoffmann-La Roche AG, Basel, Switzerland
Orcun Goksel
Department of Information Technology, Uppsala University, Uppsala, Sweden