AI Summary
In domain adaptation (DA), distribution shifts between the source and target data often degrade target-domain performance, making it essential to characterize when a DA algorithm is valid. This paper establishes three theoretical roles for conditionally invariant components (CICs), which can be estimated via the conditional invariant penalty (CIP): (i) bounding the target risk, (ii) diagnosing algorithmic failure, and (iii) mitigating feature confusion. Building on these roles, the paper proposes importance-weighted CIP (IW-CIP), which carries target risk guarantees beyond pure covariate shift or label shift, and strengthens domain invariant projection (DIP) into a CIC-enhanced variant that withstands failures induced by label-flipping features. Through risk bound analysis and experiments on synthetic data and real-world benchmarks (MNIST, CelebA, Camelyon17, DomainNet), the paper shows that CICs identify and correct performance degradation in popular DA methods, yielding gains in accuracy and generalization stability.
Abstract
Domain adaptation (DA) is a statistical learning problem that arises when the distribution of the source data used to train a model differs from that of the target data used to evaluate the model. While many DA algorithms have demonstrated considerable empirical success, blindly applying these algorithms can often lead to worse performance on new datasets. To address this, it is crucial to clarify the assumptions under which a DA algorithm has good target performance. In this work, we focus on the assumption of the presence of conditionally invariant components (CICs), which are relevant for prediction and remain conditionally invariant across the source and target data. We demonstrate that CICs, which can be estimated through conditional invariant penalty (CIP), play three prominent roles in providing target risk guarantees in DA. First, we propose a new algorithm based on CICs, importance-weighted conditional invariant penalty (IW-CIP), which has target risk guarantees beyond simple settings such as covariate shift and label shift. Second, we show that CICs help identify large discrepancies between source and target risks of other DA algorithms. Finally, we demonstrate that incorporating CICs into the domain invariant projection (DIP) algorithm can address its failure scenario caused by label-flipping features. We support our new algorithms and theoretical findings via numerical experiments on synthetic data, MNIST, CelebA, Camelyon17, and DomainNet datasets.
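To make the idea of a conditional invariant penalty concrete, here is a minimal illustrative sketch (not the paper's implementation): it penalizes the discrepancy between per-class mean feature vectors across domains, so the penalty is zero when the class-conditional feature means match. The function name `cip_penalty` and the mean-matching surrogate are assumptions for illustration; the paper's CIP may use a different distributional distance.

```python
import numpy as np

def cip_penalty(feats_by_domain, labels_by_domain, num_classes):
    """Illustrative conditional invariance penalty.

    For each class y, compute the mean feature vector within each domain,
    then sum the squared distances of those per-domain means from their
    cross-domain average. Zero penalty means the class-conditional feature
    means agree across domains (a simple surrogate for conditional
    invariance of the learned components).
    """
    penalty = 0.0
    for y in range(num_classes):
        class_means = []
        for feats, labels in zip(feats_by_domain, labels_by_domain):
            mask = labels == y
            if mask.any():  # skip domains with no samples of class y
                class_means.append(feats[mask].mean(axis=0))
        if len(class_means) < 2:
            continue  # nothing to compare for this class
        center = np.mean(class_means, axis=0)
        penalty += sum(np.sum((m - center) ** 2) for m in class_means)
    return penalty
```

In practice such a penalty would be added to the source classification loss so the feature extractor is pushed toward components whose class-conditional distribution is shared across domains.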