Unsupervised Domain Adaptation via Similarity-based Prototypes for Cross-Modality Segmentation

📅 2025-10-23
📈 Citations: 0 (influential: 0)

🤖 AI Summary
To address semantic shift and missing categories in unsupervised domain adaptation (UDA) for cross-modality medical image segmentation, this paper proposes a similarity-constrained class-prototype learning framework. Methodologically, it maintains an updateable prototype dictionary that explicitly models source-domain class prototypes in a deep embedding space, and combines similarity-driven prototype alignment with prototype-level contrastive learning to narrow the semantic gap between source and target domains while mitigating the category collapse caused by the absence of target-domain labels. The key innovation is the tight integration of prototype representation, dynamic dictionary updating, and cross-domain contrastive learning, which enables robust, annotation-free segmentation on the target domain. Extensive experiments on multiple cross-modality benchmarks (e.g., MRI→CT, T1→T2) demonstrate consistent and significant improvements over state-of-the-art methods, with average Dice score gains of 3.2–5.8 percentage points.
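The prototype dictionary described above can be sketched in a few lines: masked average pooling turns a labelled feature map into one prototype per class, and a bounded per-class queue keeps older prototypes around so a class absent from the current image is still represented. This is a minimal NumPy illustration under our own assumptions — the names (`class_prototypes`, `PrototypeDictionary`, `capacity`) and the FIFO update rule are ours, not the paper's.

```python
import numpy as np
from collections import deque

def class_prototypes(features, labels, num_classes):
    """Masked average pooling: one prototype per class present in the image.

    features: (C, H, W) deep embedding map
    labels:   (H, W) integer class map
    Returns {class_id: (C,) L2-normalised prototype} for classes that appear.
    """
    protos = {}
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            p = features[:, mask].mean(axis=1)  # average embeddings of class-c pixels
            protos[c] = p / (np.linalg.norm(p) + 1e-8)
    return protos

class PrototypeDictionary:
    """Dynamic prototype dictionary: a bounded FIFO queue per class, so a
    class missing from the current batch can still be represented by
    prototypes stored from earlier images."""

    def __init__(self, num_classes, capacity=32):
        self.queues = {c: deque(maxlen=capacity) for c in range(num_classes)}

    def update(self, protos):
        for c, p in protos.items():
            self.queues[c].append(p)

    def prototype(self, c):
        q = self.queues[c]
        if not q:
            return None  # class never observed so far
        p = np.mean(q, axis=0)
        return p / (np.linalg.norm(p) + 1e-8)
```

In practice the feature map would come from a segmentation encoder and the labels from source-domain annotations or target-domain pseudo-labels; here both are plain arrays to keep the sketch self-contained.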

📝 Abstract
Deep learning models have achieved great success on various vision challenges, but a well-trained model suffers drastic performance degradation when applied to unseen data. Because models are sensitive to domain shift, unsupervised domain adaptation attempts to reduce the domain gap while avoiding costly annotation of unseen domains. This paper proposes a novel framework for cross-modality segmentation via similarity-based prototypes. Specifically, we learn class-wise prototypes within an embedding space, then introduce a similarity constraint that makes these prototypes representative of each semantic class while keeping them separable across classes. Moreover, we use dictionaries to store prototypes extracted from different images, which prevents the class-missing problem, enables contrastive learning of prototypes, and further improves performance. Extensive experiments show that our method achieves better results than other state-of-the-art methods.
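The contrastive learning of prototypes mentioned in the abstract can be illustrated with an InfoNCE-style objective: a prototype is pulled toward stored prototypes of the same class and pushed away from prototypes of other classes. This is a generic sketch under our own assumptions (cosine similarity, temperature `tau`), not the authors' exact formulation.

```python
import numpy as np

def prototype_contrastive_loss(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style loss at the prototype level.

    anchor:    (C,) prototype from the current image
    positives: list of (C,) same-class prototypes (e.g. from the dictionary)
    negatives: list of (C,) prototypes of other classes
    """
    def sim(a, b):  # cosine similarity
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    pos = np.exp([sim(anchor, p) / tau for p in positives])
    neg = np.exp([sim(anchor, n) / tau for n in negatives])
    # Loss is small when the anchor is close to positives and far from negatives.
    return float(-np.log(pos.sum() / (pos.sum() + neg.sum())))
```

With positives drawn from the source-domain dictionary and anchors from target-domain features, minimising this loss aligns same-class prototypes across domains while keeping different classes separable.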
Problem

Research questions and friction points this paper is trying to address.

Addresses domain shift in cross-modality medical image segmentation
Learns separable class prototypes via similarity constraints
Uses dictionary-based contrastive learning to prevent class-missing issues
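The two roles of the similarity constraint listed above — prototypes should be representative of their own class yet separable from other classes — can be sketched as a toy two-term loss. The function name, the hinge margin, and the exact form of each term are our assumptions for illustration, not the paper's losses.

```python
import numpy as np

def similarity_constraint(features, labels, protos, margin=0.5):
    """Toy similarity constraint on prototypes.

    features: (C, H, W) embedding map; labels: (H, W) class map;
    protos: {class_id: (C,) L2-normalised prototype}.
    Term 1 (representative): pixel embeddings close to their own prototype.
    Term 2 (separable): pairwise prototype similarity below a margin.
    """
    # Representativeness: mean cosine distance of pixels to their prototype.
    rep, n = 0.0, 0
    for c, p in protos.items():
        mask = labels == c
        if mask.any():
            f = features[:, mask]
            f = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)
            rep += (1.0 - p @ f).mean()
            n += 1
    rep /= max(n, 1)

    # Separability: hinge on pairwise prototype cosine similarity.
    sep, pairs = 0.0, 0
    keys = list(protos)
    for i in range(len(keys)):
        for j in range(i + 1, len(keys)):
            sep += max(0.0, protos[keys[i]] @ protos[keys[j]] - margin)
            pairs += 1
    sep /= max(pairs, 1)
    return rep + sep
```

The loss reaches zero when every pixel embedding coincides with its class prototype and all prototype pairs are less similar than the margin.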
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses similarity-based prototypes for domain adaptation
Learns class-wise prototypes in embedding space
Employs dictionaries for contrastive prototype learning
Ziyu Ye
Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai, China
Chen Ju
Alibaba Group; Shanghai Jiao Tong University
Research interests: Multi-Modal Learning, AIGC, Data Governance, Video Understanding
Chaofan Ma
Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai, China
Xiaoyun Zhang
Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai, China