🤖 AI Summary
This paper addresses the performance collapse of partial domain matching (PDM) methods in universal domain adaptation (UniDA) under severe target-domain class scarcity (e.g., only 5% of classes shared), identifying "dimensional collapse" as the root cause: target representations degenerate onto a low-dimensional manifold, eroding discriminative structure. To mitigate this, the authors propose a self-supervised framework that jointly enforces contrastive alignment and uniformity regularization: a contrastive loss on unlabeled target data drives cross-domain semantic alignment, while uniformity regularization preserves feature-space coverage and keeps shared-class discriminative features from being washed out. Evaluated on UniDA benchmarks with varying shared-class ratios, the method establishes new state-of-the-art performance, with the largest gains in transfer accuracy under extreme scarcity.
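The alignment and uniformity terms mentioned above follow the standard self-supervised formulation (mean distance between positive pairs for alignment; log of the average Gaussian potential over all pairs for uniformity). A minimal numpy sketch, assuming L2-normalized embeddings; the function names and hyperparameters (`alpha`, `t`) are illustrative, not the paper's code:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project rows onto the unit hypersphere."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def alignment_loss(z1, z2, alpha=2):
    """Mean distance between embeddings of positive pairs (two views)."""
    return float(np.mean(np.linalg.norm(z1 - z2, axis=1) ** alpha))

def uniformity_loss(z, t=2.0):
    """Log of the mean Gaussian potential over distinct pairs.
    More negative = features spread more evenly over the sphere."""
    sq = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(z.shape[0], k=1)  # distinct pairs only
    return float(np.log(np.mean(np.exp(-t * sq[iu]))))
```

Intuitively, collapsed target features (all near one point) yield a uniformity loss near 0, while well-spread features drive it negative, which is why this term counteracts dimensional collapse.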
📝 Abstract
Universal Domain Adaptation (UniDA) addresses unsupervised domain adaptation where target classes may differ arbitrarily from source ones, except for a shared subset. An important approach, partial domain matching (PDM), aligns only the shared classes but struggles in extreme cases where many source classes are absent from the target domain, underperforming even the most naive baseline trained only on source data. In this work, we identify that the failure of PDM in extreme UniDA stems from dimensional collapse (DC) of the target representations. To address target DC, we propose to jointly leverage the alignment and uniformity techniques of modern self-supervised learning (SSL) on the unlabeled target data to preserve the intrinsic structure of the learned representations. Our experimental results confirm that SSL consistently advances PDM and delivers new state-of-the-art results across a broader benchmark of UniDA scenarios with different proportions of shared classes, representing a crucial step toward truly comprehensive UniDA.
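Dimensional collapse, as described in the abstract, means the target features occupy far fewer directions than the ambient dimension allows. One common way to diagnose it (a sketch under our own assumptions, not the paper's protocol) is the entropy-based effective rank of the feature matrix's singular-value spectrum:

```python
import numpy as np

def effective_rank(features, eps=1e-12):
    """Entropy-based effective rank of a (n_samples, dim) feature matrix.
    Collapsed features -> close to 1; isotropic features -> close to dim."""
    s = np.linalg.svd(features, compute_uv=False)  # singular values
    p = s / s.sum()                                # normalize to a distribution
    p = p[p > eps]                                 # drop zeros before log
    return float(np.exp(-(p * np.log(p)).sum()))   # exp of spectral entropy
```

A healthy target representation should report an effective rank well above 1; values near 1 signal the collapse regime where PDM degrades.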