🤖 AI Summary
In unsupervised domain adaptation (UDA), aligning source and target distributions while minimizing source empirical risk often neglects the discriminability of target features, limiting performance gains. This work establishes the information-theoretic necessity of explicitly enforcing discriminability constraints on the target domain and proposes RLGLC, a theoretically grounded framework that jointly optimizes transferability and target discriminability. Methodologically, RLGLC introduces an Asymmetrically-Relaxed Wasserstein-of-Wasserstein Distance (AR-WWD) to mitigate class imbalance and incorporate semantic-dimension weighting, coupled with a local consistency mechanism that preserves fine-grained discriminative structure in the target domain. Extensive experiments on multiple benchmark datasets demonstrate consistent and significant improvements over state-of-the-art methods, validating both the effectiveness and the generalizability of jointly modeling transferability and discriminability.
📝 Abstract
In this paper, we address the limitation of relying solely on distribution alignment and source-domain empirical risk minimization in Unsupervised Domain Adaptation (UDA). Our information-theoretic analysis shows that this standard adversarial framework neglects the discriminability of target-domain features, leading to suboptimal performance. To bridge this theoretical-practical gap, we define "good representation learning" as guaranteeing both transferability and discriminability, and prove that an additional loss term targeting target-domain discriminability is necessary. Building on these insights, we propose a novel adversarial UDA framework that explicitly integrates a domain alignment objective with a discriminability-enhancing constraint. Instantiated as Domain-Invariant Representation Learning with Global and Local Consistency (RLGLC), our method leverages an Asymmetrically-Relaxed Wasserstein-of-Wasserstein Distance (AR-WWD) to address class imbalance and incorporate semantic-dimension weighting, and employs a local consistency mechanism to preserve fine-grained target-domain discriminative information. Extensive experiments across multiple benchmark datasets demonstrate that RLGLC consistently surpasses state-of-the-art methods, confirming the value of our theoretical perspective and underscoring the necessity of enforcing both transferability and discriminability in adversarial UDA.
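The abstract does not spell out the AR-WWD objective, so as rough intuition only, the sketch below illustrates the general idea of an *asymmetrically relaxed* alignment loss in the simplest possible setting: a closed-form 1-D Wasserstein-1 distance between source and target feature samples, where only misalignment beyond a relaxation margin `beta` is penalized. The function names, the margin-style relaxation, and `beta` itself are illustrative assumptions for this toy example; the paper's actual AR-WWD additionally handles class imbalance and semantic-dimension weighting and is not reproduced here.

```python
def wasserstein1_1d(u, v):
    """Closed-form Wasserstein-1 distance between two equal-size 1-D
    empirical distributions: mean absolute difference of order statistics."""
    assert len(u) == len(v) and len(u) > 0
    return sum(abs(a - b) for a, b in zip(sorted(u), sorted(v))) / len(u)

def relaxed_alignment_loss(src_feats, tgt_feats, beta=0.5):
    """Toy asymmetric relaxation (hypothetical, NOT the paper's AR-WWD):
    alignment is only penalized once the transport cost exceeds a margin
    beta, so the target need only lie in a beta-relaxed neighborhood of
    the source rather than matching it exactly."""
    return max(wasserstein1_1d(src_feats, tgt_feats) - beta, 0.0)
```

In this toy form, raising `beta` enlarges the tolerated neighborhood around the source distribution, which mirrors the motivation for relaxed alignment: exact distribution matching can destroy target discriminability, while a relaxed objective leaves room for target-specific structure.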