Class-based Subset Selection for Transfer Learning under Extreme Label Shift

📅 2024-12-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Under extreme label shift, where source and target domains have highly inconsistent or even disjoint label spaces, model generalization degrades sharply. To address this, we propose a Wasserstein-distance-based weighted transfer learning framework. The method uses the Wasserstein metric to jointly guide source-class subset selection and dynamic instance reweighting, leveraging semantically relevant but non-overlapping source classes rather than relying solely on shared labels. Theoretically, we derive a tight upper bound on the generalization error, establishing robustness guarantees under label shift. Empirically, the approach achieves significant improvements over state-of-the-art methods across multiple benchmark datasets, and maintains strong generalization even in the most challenging setting, where the source and target label spaces are completely disjoint.

📝 Abstract
Existing work within transfer learning often follows a two-step process -- pre-training on a large-scale source domain and then fine-tuning on limited samples from the target domain. Yet, despite its popularity, this methodology has been shown to suffer in the presence of distributional shift -- specifically when the output spaces diverge. Previous work has focused on increasing model performance within this setting by identifying and classifying only the shared output classes between distributions. However, these methods are inherently limited as they ignore classes outside the shared class set, disregarding potential information relevant to the model transfer. This paper proposes a new process for few-shot transfer learning that selects and weights classes from the source domain to optimize the transfer between domains. More concretely, we use Wasserstein distance to choose a set of source classes and their weights that minimize the distance between the source and target domain. To justify our proposed algorithm, we provide a generalization analysis of the performance of the learned classifier over the target domain and show that our method corresponds to a bound minimization algorithm. We empirically demonstrate the effectiveness of our approach (WaSS) by experimenting on several different datasets and presenting superior performance within various label shift settings, including the extreme case where the label spaces are disjoint.
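The abstract's core idea can be sketched in a few lines: greedily add source classes whose pooled samples minimize the empirical Wasserstein distance to the target sample. This is a minimal 1-D sketch assuming each example has already been reduced to a scalar feature score; WaSS itself optimizes class weights jointly and works with richer representations, and `greedy_class_selection` is a hypothetical helper for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.stats import wasserstein_distance  # 1-D earth mover's distance

def greedy_class_selection(source_by_class, target, max_classes=3):
    """Greedily pick source classes whose pooled samples minimize the
    1-D Wasserstein distance to the target sample.

    source_by_class: dict mapping class label -> 1-D array of feature scores
    target: 1-D array of target-domain feature scores
    Returns (selected class labels, achieved distance).
    """
    selected, pooled, best = [], np.empty(0), np.inf
    for _ in range(max_classes):
        candidate, cand_dist = None, best
        for label, feats in source_by_class.items():
            if label in selected:
                continue
            d = wasserstein_distance(np.concatenate([pooled, feats]), target)
            if d < cand_dist:
                candidate, cand_dist = label, d
        if candidate is None:  # no remaining class reduces the distance
            break
        selected.append(candidate)
        pooled = np.concatenate([pooled, source_by_class[candidate]])
        best = cand_dist
    return selected, best
```

With toy Gaussians (target centered at 0, source classes centered at 0, 5, and 10), the greedy loop keeps only the class that overlaps the target and stops, since pooling in a distant class would increase the distance.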
Problem

Research questions and friction points this paper is trying to address.

Domain Adaptation
Label Shift
Knowledge Transfer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Wasserstein Distance
Domain Adaptation
Label Discrepancy
Akul Goyal
University of Illinois Urbana-Champaign, Department of Computer Science
Carl Edwards
Senior AI Scientist, Genentech
natural language processing
information extraction
chemistry
drug discovery
AI4Science