🤖 AI Summary
This paper addresses the more realistic cross-domain semi-supervised domain generalization (CD-SSDG) problem in medical image segmentation: during training, labeled and unlabeled data originate from distinct source domains, while the test domain remains entirely unknown, which exacerbates both pseudo-label noise and domain shift. To tackle this, the authors propose a dual-supervised asymmetric co-training framework that combines feature-level supervisory signals with two divergent self-supervised tasks across parallel branches. This design explicitly mitigates pseudo-label errors and strengthens the learning of domain-invariant features. The method requires no prior knowledge of the target domain and integrates naturally with standard semi-supervised learning and domain generalization paradigms. Extensive experiments on multi-center medical datasets, including fundus, polyp, and spinal cord gray matter (SCGM) segmentation, demonstrate substantial improvements in cross-domain segmentation robustness and generalization performance. The approach offers a promising pathway toward clinical deployment under low annotation budgets.
📝 Abstract
Semi-supervised domain generalization (SSDG) in medical image segmentation offers a promising solution for generalizing to unseen domains at test time, addressing domain shift while minimizing annotation costs. However, conventional SSDG methods assume that both labeled and unlabeled data are available for each source domain in the training set, a condition that is not always met in practice; limited annotation and domain shift frequently coexist within the training set itself. This paper therefore explores a more practical and challenging scenario, cross-domain semi-supervised domain generalization (CD-SSDG), where domain shifts occur between labeled and unlabeled training data, in addition to shifts between the training and testing sets. Existing SSDG methods perform sub-optimally under such shifts because of inaccurate pseudo-labels. To address this issue, we propose a novel dual-supervised asymmetric co-training (DAC) framework tailored for CD-SSDG. Building upon the co-training paradigm, in which two sub-models provide cross pseudo supervision to each other, our DAC framework adds feature-level supervision and asymmetric auxiliary tasks to each sub-model. The feature-level supervision counteracts inaccurate pseudo supervision caused by domain shifts between labeled and unlabeled data by drawing complementary supervision from the rich feature space. Additionally, two distinct auxiliary self-supervised tasks are assigned to the two sub-models to enhance domain-invariant discriminative feature learning and prevent model collapse. Extensive experiments on real-world medical image segmentation datasets, i.e., Fundus, Polyp, and SCGM, demonstrate the robust generalizability of the proposed DAC framework.
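To make the core training signal concrete, the toy sketch below illustrates the two supervision terms the abstract describes: cross pseudo supervision, where each sub-model is trained on hard pseudo-labels produced by its peer, plus a feature-level consistency term standing in for the paper's feature-level supervision. This is a minimal numpy illustration under assumed shapes and an assumed L2 feature loss with an assumed weight of 0.5; the paper's actual losses, network architectures, and auxiliary self-supervised tasks are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(probs, labels):
    # Mean cross-entropy of per-pixel class probabilities against
    # integer pseudo-labels (small epsilon for numerical stability).
    n = labels.size
    flat = probs.reshape(-1, probs.shape[-1])
    return -np.log(flat[np.arange(n), labels.ravel()] + 1e-8).mean()

# Logits from two sub-models over a toy 4x4 "image" with 3 classes.
logits_a = rng.normal(size=(4, 4, 3))
logits_b = rng.normal(size=(4, 4, 3))
probs_a, probs_b = softmax(logits_a), softmax(logits_b)

# Cross pseudo supervision: each branch learns from the other's
# argmax (hard) predictions on unlabeled data.
pseudo_a = probs_a.argmax(-1)   # supervises branch B
pseudo_b = probs_b.argmax(-1)   # supervises branch A
loss_cps = cross_entropy(probs_a, pseudo_b) + cross_entropy(probs_b, pseudo_a)

# Feature-level supervision: here assumed to be an L2 consistency
# between the branches' intermediate features (a hypothetical stand-in).
feat_a = rng.normal(size=(4, 4, 8))
feat_b = feat_a + 0.1 * rng.normal(size=(4, 4, 8))
loss_feat = ((feat_a - feat_b) ** 2).mean()

total_loss = loss_cps + 0.5 * loss_feat  # 0.5 is an assumed weight
```

In a real implementation both losses would be backpropagated through each sub-model, and the feature term helps when the peer's pixel-wise pseudo-labels are unreliable under domain shift.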