AI Summary
In cross-domain semantic segmentation, Vision Transformers (ViTs) suffer from degraded global attention under distribution shift and struggle to adapt to spatially heterogeneous, region-level transferability. To address this, we propose a region-level adaptive transferability modeling framework. Our key contributions are: (1) a novel dynamic region partitioning and transferability estimation mechanism grounded in semantic consistency; (2) a learnable transferability-aware masked attention module that enables structural awareness and spatially fine-grained cross-domain representation alignment; and (3) joint region-level domain adaptation with semantic uncertainty modeling. Evaluated on 20 cross-domain dataset pairs, our method achieves an average mIoU gain of +2.0% over fine-tuning and +1.28% over the state of the art, significantly enhancing ViTs' cross-domain generalization capability.
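To make contribution (1) concrete, the sketch below illustrates one plausible reading of dynamic region partitioning and per-region transferability estimation: patch features are grouped into coherent regions with a simple k-means, and each region is scored by the cosine similarity between its mean source and target features. This is a minimal numpy illustration under our own assumptions, not the paper's actual ACTE algorithm; the function names and the similarity-based transferability proxy are hypothetical.

```python
import numpy as np

def cluster_regions(patch_feats, k=4, iters=10, seed=0):
    """Partition patch features into k coherent regions via plain k-means
    (a simplified stand-in for dynamic region partitioning)."""
    rng = np.random.default_rng(seed)
    centers = patch_feats[rng.choice(len(patch_feats), size=k, replace=False)]
    for _ in range(iters):
        # Assign each patch to its nearest region center.
        dists = np.linalg.norm(patch_feats[:, None] - centers[None], axis=-1)
        assign = dists.argmin(axis=1)
        # Recompute centers from the current assignment.
        for j in range(k):
            if (assign == j).any():
                centers[j] = patch_feats[assign == j].mean(axis=0)
    return assign

def region_transferability(src_feats, tgt_feats, assign, k=4):
    """Score each region by cosine similarity between its mean source and
    mean target features: higher similarity -> higher transferability
    (an illustrative proxy, not the paper's estimator)."""
    scores = np.zeros(k)
    for j in range(k):
        mask = assign == j
        if not mask.any():
            continue
        s = src_feats[mask].mean(axis=0)
        t = tgt_feats[mask].mean(axis=0)
        scores[j] = s @ t / (np.linalg.norm(s) * np.linalg.norm(t) + 1e-8)
    return scores
```

Regions whose source and target statistics diverge receive low scores, marking them as candidates for prioritized adaptation.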
Abstract
Recent advances in Vision Transformers (ViTs) have set new benchmarks in semantic segmentation. However, when pretrained ViTs are adapted to new target domains, significant performance degradation often occurs due to distribution shifts, resulting in suboptimal global attention. Since self-attention mechanisms are inherently data-driven, they may fail to attend to key objects when the source and target domains differ in texture, scale, or object co-occurrence patterns. While global and patch-level domain adaptation methods provide partial solutions, region-level adaptation with dynamically shaped regions is crucial because transferability is spatially heterogeneous across image areas. We present Transferable Mask Transformer (TMT), a novel region-level adaptation framework for semantic segmentation that aligns cross-domain representations through spatial transferability analysis. TMT consists of two key components: (1) an Adaptive Cluster-based Transferability Estimator (ACTE) that dynamically segments images into structurally and semantically coherent regions for localized transferability assessment, and (2) a Transferable Masked Attention (TMA) module that integrates region-specific transferability maps into ViTs' attention mechanisms, prioritizing adaptation in regions with low transferability and high semantic uncertainty. Comprehensive evaluations across 20 cross-domain pairs demonstrate TMT's superiority, achieving an average 2.0% mIoU improvement over vanilla fine-tuning and a 1.28% gain over state-of-the-art baselines. The source code will be made publicly available.
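The idea of injecting a transferability map into attention can be sketched as a single-head attention whose logits receive an additive bias favoring low-transferability patches, so adaptation capacity concentrates where domains disagree. This is a minimal numpy sketch under our own assumptions, not the paper's TMA module; the function name, the additive-bias form, and the `alpha` weighting are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def transferable_masked_attention(q, k, v, transfer_map, alpha=1.0):
    """Single-head scaled dot-product attention whose logits are biased
    toward keys with LOW transferability (illustrative, not the actual TMA).

    q, k, v: (num_patches, dim) arrays.
    transfer_map: (num_patches,) per-patch transferability scores in [0, 1].
    alpha: strength of the transferability bias (hypothetical knob).
    """
    dim = q.shape[-1]
    logits = q @ k.T / np.sqrt(dim)
    # Patches with low transferability get a larger additive bias,
    # drawing more attention mass during adaptation.
    logits = logits + alpha * (1.0 - transfer_map)[None, :]
    return softmax(logits, axis=-1) @ v
```

Setting `alpha=0` recovers standard attention, so the bias can be annealed or learned without changing the module's interface.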