Transferable Mask Transformer: Cross-domain Semantic Segmentation with Region-adaptive Transferability Estimation

πŸ“… 2025-04-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
In cross-domain semantic segmentation, Vision Transformers (ViTs) suffer from degraded global attention due to distribution shift and struggle to adapt to spatially heterogeneous region-level transferability. To address this, we propose a region-level adaptive transferability modeling framework. Our key contributions are: (1) a novel dynamic region partitioning and transferability estimation mechanism grounded in semantic consistency; (2) a learnable transferability-aware masked attention module that enables structural awareness and spatially fine-grained cross-domain representation alignment; and (3) joint region-level domain adaptation with semantic uncertainty modeling. Evaluated on 20 cross-domain dataset pairs, our method achieves an average mIoU gain of +2.0% over fine-tuning and +1.28% over the state-of-the-art, significantly enhancing ViT’s cross-domain generalization capability.

πŸ“ Abstract
Recent advances in Vision Transformers (ViTs) have set new benchmarks in semantic segmentation. However, when adapting pretrained ViTs to new target domains, significant performance degradation often occurs due to distribution shifts, resulting in suboptimal global attention. Since self-attention mechanisms are inherently data-driven, they may fail to effectively attend to key objects when source and target domains exhibit differences in texture, scale, or object co-occurrence patterns. While global and patch-level domain adaptation methods provide partial solutions, region-level adaptation with dynamically shaped regions is crucial due to spatial heterogeneity in transferability across different image areas. We present Transferable Mask Transformer (TMT), a novel region-level adaptation framework for semantic segmentation that aligns cross-domain representations through spatial transferability analysis. TMT consists of two key components: (1) An Adaptive Cluster-based Transferability Estimator (ACTE) that dynamically segments images into structurally and semantically coherent regions for localized transferability assessment, and (2) A Transferable Masked Attention (TMA) module that integrates region-specific transferability maps into ViTs' attention mechanisms, prioritizing adaptation in regions with low transferability and high semantic uncertainty. Comprehensive evaluations across 20 cross-domain pairs demonstrate TMT's superiority, achieving an average 2% mIoU improvement over vanilla fine-tuning and a 1.28% increase compared to state-of-the-art baselines. The source code will be publicly available.
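The TMA idea described in the abstract (injecting a region transferability map into the attention mechanism) can be illustrated with a toy sketch. Here a per-token transferability score additively biases the attention logits so that keys in low-transferability regions receive more weight. The additive log-bias form, the `alpha` scale, and the function name are all hypothetical readings of the abstract, not the paper's actual formulation.

```python
import numpy as np

def transferability_masked_attention(Q, K, V, t_map, alpha=1.0):
    """Toy single-head attention biased by a transferability map.

    Q: (n_q, d) queries; K, V: (n_k, d) keys/values.
    t_map: (n_k,) scores in (0, 1]; low values mark hard-to-transfer
    regions whose keys get extra attention weight (hypothetical form).
    Returns (output, attention_weights).
    """
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)
    # Additive bias grows as transferability drops: log(1 / t).
    logits = logits + alpha * np.log(1.0 / np.maximum(t_map, 1e-6))
    # Numerically stable softmax over keys.
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ V, w
```

With uniform (zero) queries and keys, the weights reduce to being proportional to `t_map ** -alpha`, which makes the modulation easy to verify in isolation.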
Problem

Research questions and friction points this paper is trying to address.

Address performance degradation in cross-domain semantic segmentation
Improve global attention in ViTs for domain shifts
Enable region-level adaptation with dynamic transferability estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Region-level adaptation with dynamic regions
Adaptive Cluster-based Transferability Estimator (ACTE)
Transferable Masked Attention (TMA) module
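As a rough illustration of the ACTE component listed above, region partitioning can be approximated by clustering patch embeddings into dynamically shaped regions and scoring each region by a source-target feature discrepancy. Both the plain k-means partitioning and the inverse-discrepancy score below are hypothetical stand-ins; the paper's estimator is grounded in semantic consistency and is more elaborate.

```python
import numpy as np

def cluster_regions(patch_feats, k=3, iters=10, seed=0):
    """Illustrative stand-in for ACTE's region partitioning:
    plain k-means over patch embeddings yields dynamically
    shaped, semantically coherent regions."""
    rng = np.random.default_rng(seed)
    centers = patch_feats[rng.choice(len(patch_feats), k, replace=False)]
    for _ in range(iters):
        # Assign each patch to its nearest center.
        d = ((patch_feats[:, None] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Recompute centers from non-empty clusters.
        for j in range(k):
            if (labels == j).any():
                centers[j] = patch_feats[labels == j].mean(0)
    return labels

def region_transferability(labels, src_feats, tgt_feats):
    """Hypothetical per-region transferability proxy: inverse of the
    mean-feature gap between source and target patches in a region."""
    scores = {}
    for j in np.unique(labels):
        m = labels == j
        gap = np.linalg.norm(src_feats[m].mean(0) - tgt_feats[m].mean(0))
        scores[int(j)] = 1.0 / (1.0 + gap)  # in (0, 1]
    return scores
```

Identical source and target features give a score of exactly 1.0 per region, and the score decays toward 0 as the domain gap grows, which matches the intuition of prioritizing adaptation in low-transferability regions.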
Enming Zhang
Tsinghua Shenzhen International Graduate School, Tsinghua University
Zhengyu Li
Peking University
Yanru Wu
Tsinghua Shenzhen International Graduate School, Tsinghua University
Jingge Wang
Tsinghua University
Yang Tan
Tsinghua Shenzhen International Graduate School, Tsinghua University
Ruizhe Zhao
Research Engineer, Google DeepMind
Guan Wang
Hong Kong Polytechnic University
Yang Li
Tsinghua Shenzhen International Graduate School, Tsinghua University