🤖 AI Summary
Cross-domain and cross-modality semantic knowledge transfer remains challenging in semi-supervised medical image segmentation due to domain shifts and modality heterogeneity. Method: We propose TransMedSeg, a transferable semantic framework centered on a novel Transferable Semantic Augmentation (TSA) module, which achieves implicit cross-domain semantic alignment without explicit data generation. We theoretically derive an upper bound on the expected cross-entropy loss, enabling the first theory-driven semantic alignment optimization in semi-supervised learning (SSL). TransMedSeg integrates a teacher-student architecture, a lightweight memory module, cross-domain distribution matching, and intra-domain structural consistency preservation. Results: Evaluated on multi-center, multi-modality medical imaging benchmarks, TransMedSeg significantly outperforms state-of-the-art semi-supervised methods. It establishes a new paradigm for medical image representation learning that is both transferable across domains and modalities and label-efficient.
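The summary mentions a teacher-student architecture; in SSL such architectures (e.g. Mean Teacher) commonly update the teacher as an exponential moving average (EMA) of the student. The paper's exact update rule is not given here, so the following is a minimal illustrative sketch of the generic EMA update, with the function name and parameter layout being assumptions, not the authors' implementation:

```python
def ema_update(teacher_params, student_params, decay=0.99):
    """Move each teacher parameter a small step toward the student.

    teacher_params / student_params: dicts mapping parameter names to
    numeric weights (scalars or arrays). `decay` close to 1.0 makes the
    teacher a slowly varying average of student snapshots.
    """
    for name, w_student in student_params.items():
        teacher_params[name] = decay * teacher_params[name] + (1.0 - decay) * w_student
    return teacher_params
```

With `decay=0.9`, a teacher weight of 0.0 and a student weight of 1.0 yield an updated teacher weight of 0.1; repeated updates converge geometrically toward the student.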
📝 Abstract
Semi-supervised learning (SSL) has achieved significant progress in semi-supervised medical image segmentation (SSMIS) through effective utilization of limited labeled data. While current SSL methods for medical images predominantly rely on consistency regularization and pseudo-labeling, they often overlook transferable semantic relationships across different clinical domains and imaging modalities. To address this, we propose TransMedSeg, a novel transferable semantic framework for semi-supervised medical image segmentation. Our approach introduces a Transferable Semantic Augmentation (TSA) module, which implicitly enhances feature representations by aligning domain-invariant semantics through cross-domain distribution matching and intra-domain structural preservation. Specifically, TransMedSeg constructs a unified feature space where teacher network features are adaptively augmented towards student network semantics via a lightweight memory module, enabling implicit semantic transformation without explicit data generation. This augmentation is realized implicitly through an expected transferable cross-entropy loss computed over the augmented teacher distribution. An upper bound of the expected loss is theoretically derived and minimized during training, incurring negligible computational overhead. Extensive experiments on medical image datasets demonstrate that TransMedSeg outperforms existing semi-supervised methods, establishing a new direction for transferable representation learning in medical image analysis.
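The abstract's expected transferable cross-entropy and its closed-form upper bound are not spelled out here. Implicit semantic data augmentation (ISDA) derives a well-known upper bound of exactly this flavor: if features are augmented by a zero-mean Gaussian with per-class covariance, the expected cross-entropy is bounded by a softmax loss with a covariance-dependent margin on the non-target logits. The sketch below computes that ISDA-style bound; the function name, argument layout, and use of per-class covariances are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def expected_ce_upper_bound(features, labels, W, b, class_cov, lam):
    """ISDA-style closed-form upper bound on the expected cross-entropy
    under Gaussian semantic augmentation of the features.

    features:  (N, D) feature vectors
    labels:    (N,) integer class labels
    W, b:      (C, D) classifier weights and (C,) biases
    class_cov: (C, D, D) per-class feature covariance estimates
    lam:       augmentation strength (lam=0 recovers the plain CE loss)
    """
    logits = features @ W.T + b                       # (N, C)
    losses = []
    for i, y in enumerate(labels):
        # Margin term (lam/2) * (w_j - w_y)^T Sigma_y (w_j - w_y) per class j;
        # it is zero for the target class y, nonnegative for PSD Sigma_y.
        diff = W - W[y]                               # (C, D)
        margin = 0.5 * lam * np.einsum("cd,de,ce->c", diff, class_cov[y], diff)
        z = logits[i] + margin
        z = z - z.max()                               # numerical stability
        losses.append(-(z[y] - np.log(np.exp(z).sum())))
    return float(np.mean(losses))
```

Because the margin is nonnegative for positive semi-definite covariances and vanishes on the target class, the bound never falls below the plain cross-entropy, and setting `lam=0` recovers it exactly; minimizing the bound trains the classifier as if infinitely many augmented features had been sampled, at negligible extra cost.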