🤖 AI Summary
This work addresses class fairness in unsupervised domain adaptation (UDA), where models often exhibit a performance bias toward easily classified categories at the expense of harder ones. To mitigate this imbalance, we introduce Virtual Label-distribution-aware Learning (VILL), the first framework to explicitly incorporate class fairness into UDA. VILL dynamically balances learning difficulty across classes through adaptive re-weighting, KL-divergence-driven refinement of decision boundaries, and modeling of virtual label distributions. Designed as a plug-and-play module, VILL integrates seamlessly into existing UDA methods. Extensive experiments on mainstream benchmarks demonstrate that our approach significantly improves worst-class accuracy while maintaining overall performance, thereby enhancing both model robustness and class-wise equity.
📝 Abstract
Unsupervised Domain Adaptation (UDA) aims to mitigate performance degradation when training and testing data are sampled from different distributions. While significant progress has been made in enhancing overall accuracy, most existing methods overlook performance disparities across categories, an issue we refer to as category fairness. Our empirical analysis reveals that UDA classifiers tend to favor certain easy categories while neglecting difficult ones. To address this, we propose Virtual Label-distribution-aware Learning (VILL), a simple yet effective framework designed to improve worst-case performance while preserving high overall accuracy. The core of VILL is an adaptive re-weighting strategy that amplifies the influence of hard-to-classify categories. Furthermore, we introduce a KL-divergence-based re-balancing strategy, which explicitly adjusts decision boundaries to enhance category fairness. Experiments on commonly used datasets demonstrate that VILL can be seamlessly integrated as a plug-and-play module into existing UDA methods, significantly improving category fairness.
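To give a concrete flavor of the two ideas in the abstract, the sketch below illustrates (a) amplifying the weight of hard-to-classify categories and (b) a distribution-matching adjustment of class scores toward a target label distribution, in the spirit of KL-based re-balancing. This is a minimal NumPy illustration under our own assumptions; the function names (`adaptive_class_weights`, `kl_rebalance`) are hypothetical and this is not the paper's exact formulation.

```python
import numpy as np

def adaptive_class_weights(per_class_acc, tau=1.0):
    """Illustrative re-weighting: classes with lower estimated accuracy
    (harder classes) get larger loss weights; tau controls sharpness.
    Hypothetical helper, not VILL's actual objective."""
    acc = np.asarray(per_class_acc, dtype=float)
    logits = (1.0 - acc) / tau              # higher difficulty -> higher logit
    w = np.exp(logits - logits.max())       # numerically stable softmax-style weights
    return w / w.sum() * len(w)             # normalize so weights average to 1.0

def kl_rebalance(pred_probs, target_dist):
    """Illustrative re-balancing: rescale class probabilities so the model's
    marginal (virtual) label distribution moves toward a target distribution
    (e.g. uniform), shifting decision boundaries toward neglected classes."""
    p = np.asarray(pred_probs, dtype=float)
    marginal = p.mean(axis=0)               # model's implied label distribution
    adjusted = p * (target_dist / marginal) # boost under-predicted classes
    return adjusted / adjusted.sum(axis=1, keepdims=True)

# Example: 3 classes, the third much harder than the first
weights = adaptive_class_weights([0.95, 0.70, 0.40])
probs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1]])
balanced = kl_rebalance(probs, np.full(3, 1 / 3))
```

Here the hard class (40% accuracy) receives the largest weight, and the re-balanced probabilities shift mass toward the class the model under-predicts, which is the qualitative effect the abstract describes.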