TransMedSeg: A Transferable Semantic Framework for Semi-Supervised Medical Image Segmentation

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Cross-domain and cross-modality semantic knowledge transfer remains challenging in semi-supervised medical image segmentation due to domain shifts and modality heterogeneity. Method: We propose TransMedSeg, a transferable semantic framework centered on the novel Transferable Semantic Augmentation (TSA) module, which achieves implicit cross-domain semantic alignment without explicit data generation. We theoretically derive an upper bound on expected cross-entropy loss, enabling the first theory-driven semantic alignment optimization in semi-supervised learning (SSL). TransMedSeg integrates a teacher-student architecture, a lightweight memory module, cross-domain distribution matching, and intra-domain structural consistency preservation. Results: Evaluated on multi-center, multi-modality medical imaging benchmarks, TransMedSeg significantly outperforms state-of-the-art semi-supervised methods. It establishes a new paradigm for medical image representation learning that is both transferable across domains/modalities and label-efficient.

📝 Abstract
Semi-supervised learning (SSL) has achieved significant progress in medical image segmentation (SSMIS) through effective utilization of limited labeled data. While current SSL methods for medical images predominantly rely on consistency regularization and pseudo-labeling, they often overlook transferable semantic relationships across different clinical domains and imaging modalities. To address this, we propose TransMedSeg, a novel transferable semantic framework for semi-supervised medical image segmentation. Our approach introduces a Transferable Semantic Augmentation (TSA) module, which implicitly enhances feature representations by aligning domain-invariant semantics through cross-domain distribution matching and intra-domain structural preservation. Specifically, TransMedSeg constructs a unified feature space where teacher network features are adaptively augmented towards student network semantics via a lightweight memory module, enabling implicit semantic transformation without explicit data generation. Interestingly, this augmentation is implicitly realized through an expected transferable cross-entropy loss computed over the augmented teacher distribution. An upper bound of the expected loss is theoretically derived and minimized during training, incurring negligible computational overhead. Extensive experiments on medical image datasets demonstrate that TransMedSeg outperforms existing semi-supervised methods, establishing a new direction for transferable representation learning in medical image analysis.
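The abstract describes augmenting teacher features towards student semantics and then minimizing a theoretically derived upper bound of the expected cross-entropy over the augmented (Gaussian-style) feature distribution, at negligible extra cost. The paper's exact bound is not given here; the sketch below assumes an ISDA-style closed-form surrogate with per-class diagonal covariances (which could be maintained by the lightweight memory module). All names, shapes, and the diagonal-covariance choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax_ce(logits, labels):
    # Numerically stable mean cross-entropy over rows of logits.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def tsa_upper_bound_ce(features, labels, weight, bias, cov, lam):
    """ISDA-style upper bound of the expected cross-entropy when each
    feature f is implicitly augmented as f ~ N(f, lam * Sigma_y)
    (hypothetical stand-in for the paper's transferable loss).

    features: (N, D) teacher features
    labels:   (N,)   pseudo- or ground-truth class indices
    weight:   (C, D) linear classifier weights; bias: (C,)
    cov:      (C, D) per-class diagonal covariances (e.g. from a memory bank)
    lam:      augmentation strength
    """
    logits = features @ weight.T + bias              # (N, C)
    w_y = weight[labels]                             # (N, D)
    diff = weight[None, :, :] - w_y[:, None, :]      # (N, C, D): w_j - w_y
    sigma = cov[labels]                              # (N, D) diagonal Sigma_y
    # Quadratic term (w_j - w_y)^T Sigma_y (w_j - w_y) with diagonal Sigma_y;
    # shifting the logits by half of it yields the closed-form upper bound,
    # so no augmented samples are ever materialized.
    quad = (diff ** 2 * sigma[:, None, :]).sum(-1)   # (N, C), zero at j = y
    return softmax_ce(logits + 0.5 * lam * quad, labels)
```

With `lam = 0` this reduces to the plain cross-entropy, and increasing `lam` can only raise the loss (the quadratic shift is non-negative and vanishes on the target class), which is the defining property of such a bound.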
Problem

Research questions and friction points this paper is trying to address.

Enhancing semi-supervised medical image segmentation with transferable semantics
Addressing overlooked cross-domain semantic relationships in SSL methods
Improving feature representation via domain-invariant alignment and structural preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transferable Semantic Augmentation module enhances features
Aligns domain-invariant semantics via cross-domain matching
Implicit semantic transformation with lightweight memory module
Mengzhu Wang
National University of Defense Technology
transfer learning, computer vision
Jiao Li
Columbia University
Applied Math, Machine Learning, Finance, Climate Change
Shanshan Wang
Anhui University
Long Lan
Peking University
Huibin Tan
Peking University
Liang Yang
Hebei University of Technology
Guoli Yang
Peking University