Distributionally Robust Classification for Multi-source Unsupervised Domain Adaptation

📅 2026-01-29
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the degradation of generalization performance in unsupervised domain adaptation caused by distributional shifts between source and target domains, scarcity of target-domain data, and spurious correlations in the source domain. To this end, the authors propose a distributionally robust learning framework that, for the first time in this setting, jointly models uncertainty in both the covariate distribution and the conditional label distribution. The approach is applicable to both single-source and multi-source scenarios and can be seamlessly integrated with existing unsupervised domain adaptation algorithms. By leveraging distributionally robust optimization, the method enhances model robustness and generalization under extreme target-data scarcity and the presence of spurious correlations. Extensive experiments demonstrate that the proposed framework consistently outperforms strong baselines across various distribution shift settings, maintaining superior performance even when only a few target samples are available.

📝 Abstract
Unsupervised domain adaptation (UDA) is a statistical learning problem in which the distribution of training (source) data differs from that of test (target) data. In this setting, one has access to labeled data only from the source domain and unlabeled data from the target domain. The central objective is to leverage the source data and the unlabeled target data to build models that generalize to the target domain. Despite its potential, existing UDA approaches often struggle in practice, particularly in scenarios where the target domain offers only limited unlabeled data or spurious correlations dominate the source domain. To address these challenges, we propose a novel distributionally robust learning framework that models uncertainty in both the covariate distribution and the conditional label distribution. Our approach is motivated by the multi-source domain adaptation setting but is also directly applicable to the single-source scenario, making it versatile in practice. We develop an efficient learning algorithm that can be seamlessly integrated with existing UDA methods. Extensive experiments under various distribution shift scenarios show that our method consistently outperforms strong baselines, especially when target data are extremely scarce.
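The worst-case principle behind distributionally robust optimization can be illustrated with a generic soft group-DRO surrogate over multiple source domains. This is a simplified sketch of the general technique, not the authors' algorithm; the function name and the temperature parameter `eta` are illustrative assumptions.

```python
import numpy as np

def group_dro_objective(per_domain_losses, eta=1.0):
    """Soft worst-case loss over source domains (generic group-DRO sketch).

    Domains with higher empirical loss receive exponentially larger weight,
    so the objective approximates the worst-case source mixture as eta grows.
    """
    losses = np.asarray(per_domain_losses, dtype=float)
    weights = np.exp(eta * losses)      # upweight hard domains
    weights /= weights.sum()            # normalize to a mixture distribution
    return float(weights @ losses)      # robust (reweighted) loss

# Example: three source domains with different empirical losses.
losses = [0.2, 0.5, 1.1]
robust = group_dro_objective(losses, eta=5.0)
# The robust objective sits between the average loss and the max loss,
# pulled toward the hardest domain.
assert np.mean(losses) < robust < max(losses)
```

Minimizing such a reweighted loss penalizes models that perform well only on easy source domains, which is one way robustness to distribution shift is commonly operationalized.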
Problem

Research questions and friction points this paper is trying to address.

unsupervised domain adaptation
distribution shift
multi-source
distributional robustness
generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributionally Robust Learning
Unsupervised Domain Adaptation
Multi-source Adaptation
Covariate Distribution
Conditional Label Distribution
Seonghwi Kim
Pohang University of Science and Technology
Sung Ho Jo
Pohang University of Science and Technology
Wooseok Ha
KAIST
Statistics · Machine learning · Optimization
Minwoo Chae
POSTECH
Bayesian Inference · Deep Learning · Distributionally Robust Inference · Mathematical Statistics