Prototypical Distillation and Debiased Tuning for Black-box Unsupervised Domain Adaptation

📅 2024-12-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses black-box unsupervised domain adaptation (BUDA), a novel setting where only API-accessible predictions—including class labels and confidence scores—from a fixed source model are available, with neither source data nor model parameters exposed. We formally define this paradigm and propose ProDDing, a two-stage framework. First, prototype-guided knowledge distillation enables cross-domain knowledge transfer by aligning target prototypes with source-class distributions. Second, confidence-weighted soft-label learning jointly optimizes target predictions while logit-level class debiasing regularization mitigates prediction bias induced by domain shift. Additionally, a hard-label robustness extension enhances generalization under label-noise-prone scenarios. Extensive experiments across multiple benchmarks demonstrate that ProDDing significantly outperforms existing black-box UDA methods, maintaining strong robustness and superior performance even under purely hard-label supervision.

📝 Abstract
Unsupervised domain adaptation aims to transfer knowledge from a related, label-rich source domain to an unlabeled target domain, thereby circumventing the high costs associated with manual annotation. Recently, there has been growing interest in source-free domain adaptation, a paradigm in which only a pre-trained model, rather than the labeled source data, is provided to the target domain. Given the potential risk of source data leakage via model inversion attacks, this paper introduces a novel setting called black-box domain adaptation, where the source model is accessible only through an API that provides the predicted label along with the corresponding confidence value for each query. We develop a two-step framework named Prototypical Distillation and Debiased tuning (ProDDing). In the first step, ProDDing leverages both the raw predictions from the source model and prototypes derived from the target domain as teachers to distill a customized target model. In the second step, ProDDing keeps fine-tuning the distilled model by penalizing logits that are biased toward certain classes. Empirical results across multiple benchmarks demonstrate that ProDDing outperforms existing black-box domain adaptation methods. Moreover, in the case of hard-label black-box domain adaptation, where only predicted labels are available, ProDDing achieves significant improvements over these methods. Code will be available at https://github.com/tim-learn/ProDDing/.
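The two steps described in the abstract can be sketched in code: build target-domain prototypes from the source model's API predictions, distill a student against a mixed teacher (API predictions plus prototype-derived labels), then regularize logits that drift toward particular classes. The sketch below is a minimal NumPy illustration under stated assumptions — the function names, the 50/50 teacher mixture, the cosine-similarity prototype labels, and the variance-style debias penalty are all illustrative choices, not the paper's actual formulation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def prototype_soft_labels(features, source_probs, temperature=1.0):
    """Step 1a (illustrative): class prototypes as confidence-weighted
    means of target features; soft labels from cosine similarity
    between each target feature and each prototype."""
    # weights: (N, K), each column sums to 1 over the batch
    weights = source_probs / (source_probs.sum(axis=0, keepdims=True) + 1e-8)
    prototypes = weights.T @ features                       # (K, D)
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    return softmax((f @ p.T) / temperature)                 # (N, K)

def distillation_loss(student_logits, source_probs, proto_probs, alpha=0.5):
    """Step 1b (illustrative): cross-entropy against a mixed teacher of
    raw API predictions and prototype-derived soft labels."""
    teacher = alpha * source_probs + (1 - alpha) * proto_probs
    log_student = np.log(softmax(student_logits) + 1e-8)
    return -np.mean(np.sum(teacher * log_student, axis=1))

def debias_penalty(student_logits):
    """Step 2 (illustrative): penalize systematic class bias by the
    squared deviation of each class's batch-mean logit from the
    overall mean logit."""
    mean_per_class = student_logits.mean(axis=0)
    return np.mean((mean_per_class - mean_per_class.mean()) ** 2)
```

In a training loop, the distilled model from step 1 would keep being fine-tuned with the debias penalty added to its objective; here the two terms are shown separately only to mirror the two-step description in the abstract.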
Problem

Research questions and friction points this paper is trying to address.

Black-box Domain Adaptation
Knowledge Transfer
Unsupervised Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Black-box Domain Adaptation
ProDDing Framework
Prototypical Distillation
Jian Liang
Kuaishou Inc.
transfer learning, graph learning
Lijun Sheng
University of Science and Technology of China
computer vision, model adaptation
Hongmin Liu
School of Intelligence Science and Technology, University of Science and Technology Beijing
Ran He
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences and the School of Artificial Intelligence, University of Chinese Academy of Sciences