AI Summary
This work addresses the challenges of scarce annotations, unknown domain labels in multi-center medical imaging data, and severe domain shift by proposing a domain-invariant hybrid-domain semi-supervised segmentation framework. Without requiring explicit domain labels, the method enhances data diversity through a copy-paste augmentation strategy and introduces a Clustering-based Maximum Mean Discrepancy (CMMD) module within a teacher-student architecture that aligns the feature distributions of unlabeled samples with those of labeled anchor points, thereby enabling domain-invariant representation learning. Experiments on the Fundus and M&Ms datasets demonstrate that the proposed approach significantly outperforms existing semi-supervised and domain adaptation methods, achieving accurate and robust segmentation.
Abstract
Deep learning has shown remarkable progress in medical image semantic segmentation, yet its success heavily depends on large-scale expert annotations and consistent data distributions. In practice, annotations are scarce, and images are collected from multiple scanners or centers, leading to mixed-domain settings with unknown domain labels and severe domain gaps. Existing semi-supervised or domain adaptation approaches typically assume either a single domain shift or access to explicit domain indices, assumptions that rarely hold in real-world deployment. In this paper, we propose a domain-invariant mixed-domain semi-supervised segmentation framework that jointly enhances data diversity and mitigates domain bias. A Copy-Paste Mechanism (CPM) augments the training set by transferring informative regions across domains, while a Cluster Maximum Mean Discrepancy (CMMD) block clusters unlabeled features and aligns them with labeled anchors via an MMD objective, encouraging domain-invariant representations. Integrated within a teacher-student framework, our method achieves robust and precise segmentation even with very few labeled examples and multiple unknown domain shifts. Experiments on the Fundus and M&Ms benchmarks demonstrate that our approach consistently surpasses semi-supervised and domain adaptation methods, offering a promising solution for mixed-domain semi-supervised medical image segmentation.
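The abstract describes the CMMD block only at a high level: cluster the unlabeled features, then pull each cluster toward the labeled anchor set with an MMD objective. The following is a minimal NumPy sketch of that idea under our own assumptions; the function names (`cmmd_loss`, `mmd2`), the RBF kernel choice, and the hyperparameters `k` and `gamma` are illustrative, not the paper's actual implementation.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF (Gaussian) kernel matrix between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    # Biased estimate of squared Maximum Mean Discrepancy between samples x, y.
    return (rbf_kernel(x, x, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean()
            + rbf_kernel(y, y, gamma).mean())

def kmeans(x, k, iters=20, seed=0):
    # Minimal k-means: returns a cluster assignment for each row of x.
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        assign = ((x[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            pts = x[assign == j]
            if len(pts):  # skip empty clusters
                centers[j] = pts.mean(0)
    return assign

def cmmd_loss(unlabeled_feats, labeled_feats, k=3, gamma=1.0):
    # Cluster unlabeled features, then align each non-empty cluster with the
    # labeled anchor set via MMD; the mean over clusters is the loss.
    assign = kmeans(unlabeled_feats, k)
    losses = [mmd2(unlabeled_feats[assign == j], labeled_feats, gamma)
              for j in range(k) if (assign == j).any()]
    return float(np.mean(losses))
```

Minimizing such a loss alongside the usual supervised and consistency terms would push unlabeled feature clusters toward the labeled distribution, which is the domain-invariance effect the abstract attributes to CMMD.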