Conditional Support Alignment for Domain Adaptation with Label Shift

📅 2023-05-29
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 0
📄 PDF
🤖 AI Summary
Unsupervised domain adaptation (UDA) under label distribution shift suffers from performance degradation, as conventional methods rely on the covariate shift assumption and fail to guarantee discriminative domain-invariant features. Method: This paper proposes Conditional Adversarial Support Alignment (CASUAL), a theoretically grounded framework that jointly optimizes support alignment of conditional feature distributions and classification discriminability via conditional adversarial training. Contribution/Results: CASUAL introduces a theoretical analysis of conditional support alignment and derives a tighter upper bound on the target risk, justifying its advantage over classical marginal alignment. Extensive experiments across multiple UDA benchmarks with label shift demonstrate that CASUAL consistently outperforms state-of-the-art methods, validating both the effectiveness and generalizability of theory-driven alignment strategies.
๐Ÿ“ Abstract
Unsupervised domain adaptation (UDA) refers to a domain adaptation framework in which a learning model is trained on labeled samples from the source domain and unlabeled samples from the target domain. The dominant existing methods, which rely on the classical covariate shift assumption to learn domain-invariant feature representations, have yielded suboptimal performance under label distribution shift. In this paper, we propose a novel Conditional Adversarial SUpport ALignment (CASUAL) method that minimizes the conditional symmetric support divergence between the source and target domains' feature representation distributions, yielding a more discriminative representation for the classification task. We also introduce a novel theoretical target risk bound, which justifies the merits of aligning the supports of conditional feature distributions compared to the existing marginal support alignment approach in UDA settings. We then provide a complete training process in which the objective optimization functions are derived directly from the proposed target risk bound. Our empirical results demonstrate that CASUAL outperforms other state-of-the-art methods on different UDA benchmark tasks under different label shift conditions.
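The conditional symmetric support divergence described in the abstract can be sketched numerically. The following is a minimal, illustrative estimator, not the paper's exact objective: for each class, measure the average distance from each source feature to the nearest same-class target feature and vice versa, then average across classes (using pseudo-labels for the unlabeled target domain). The function names and the nearest-neighbor estimate are assumptions for illustration.

```python
import numpy as np

def symmetric_support_divergence(xs, xt):
    """Nearest-neighbor estimate of a symmetric support divergence:
    mean distance from each point in xs to the support of xt,
    plus the symmetric term from xt to the support of xs."""
    d = np.linalg.norm(xs[:, None, :] - xt[None, :, :], axis=-1)  # pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def conditional_ssd(feats_s, ys, feats_t, yt_pseudo, num_classes):
    """Average the support divergence over class-conditional feature sets.
    Target labels are pseudo-labels, since the target domain is unlabeled."""
    total, used = 0.0, 0
    for c in range(num_classes):
        s_c, t_c = feats_s[ys == c], feats_t[yt_pseudo == c]
        if len(s_c) and len(t_c):
            total += symmetric_support_divergence(s_c, t_c)
            used += 1
    return total / max(used, 1)
```

The estimate is zero when the class-conditional supports coincide and grows as same-class source and target features drift apart, which is the quantity the adversarial training is meant to drive down.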
Problem

Research questions and friction points this paper is trying to address.

Addresses label distribution shift in unsupervised domain adaptation
Minimizes conditional support divergence between source and target domains
Improves discriminative representation for classification under label shift
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conditional adversarial support alignment method
Minimizes conditional symmetric support divergence
Novel theoretical target risk bound optimization
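The bullets above combine into a single training objective. A hedged sketch of its assumed form (the weight `lam`, the pseudo-labeling strategy, and the nearest-neighbor divergence estimate are all illustrative, not the paper's exact formulation): source classification loss plus a weighted class-conditional support-alignment penalty.

```python
import numpy as np

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true class.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def casual_style_loss(probs_s, ys, feats_s, feats_t, yt_pseudo, num_classes, lam=0.1):
    """Illustrative joint objective: source cross-entropy plus a
    class-conditional support-alignment penalty (nearest-neighbor estimate)."""
    align, used = 0.0, 0
    for c in range(num_classes):
        s_c, t_c = feats_s[ys == c], feats_t[yt_pseudo == c]
        if len(s_c) and len(t_c):
            d = np.linalg.norm(s_c[:, None, :] - t_c[None, :, :], axis=-1)
            align += d.min(axis=1).mean() + d.min(axis=0).mean()
            used += 1
    return cross_entropy(probs_s, ys) + lam * align / max(used, 1)
```

When source and target class-conditional supports already coincide, the penalty vanishes and the objective reduces to plain source classification; the trade-off weight balances discriminability against alignment.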
🔎 Similar Papers
No similar papers found.
A. Nguyen
University of Illinois Chicago
Lam C. Tran
VinAI Research
Anh Tong
Korea University
Bayesian Inference · Gaussian Processes · Neural Differential Equations
Tuan-Duy H. Nguyen
National University of Singapore
Toan Tran
VinAI Research