Unified Source-Free Domain Adaptation

📅 2024-03-12
🏛️ arXiv.org
📈 Citations: 6
Influential: 1
🤖 AI Summary
Existing source-free domain adaptation (SFDA) methods are constrained to specific settings—e.g., closed-set, open-set, partial-set, or generalized SFDA—and rely on target-domain priors, limiting their applicability and theoretical grounding. Method: This work introduces Unified SFDA, the first formal problem formulation of SFDA that requires neither source data nor target-domain prior knowledge. From a causal perspective, it models the generative relationship between latent variables and decisions, proposing the Latent Causal Factors Discovery (LCFD) framework. LCFD integrates vision-language pretrained models (e.g., CLIP) with a causally motivated information bottleneck objective to achieve theoretically guaranteed representation disentanglement. Contribution/Results: Unified SFDA establishes a general, prior-free SFDA paradigm. It achieves state-of-the-art performance across all major SFDA benchmarks and significantly improves out-of-distribution generalization, demonstrating robustness to unseen domain shifts without access to source data or target annotations.
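The CLIP integration mentioned in the summary rests on a simple mechanism: a vision-language model can supply zero-shot pseudo-labels for unlabeled target data by comparing image embeddings against class text embeddings. The sketch below illustrates that mechanism only; the function name, temperature value, and feature arrays are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def zero_shot_pseudo_labels(image_feats, text_feats, temperature=0.01):
    """Assign each image a pseudo-label by cosine similarity to class
    text embeddings (the standard CLIP zero-shot recipe, sketched here)."""
    # L2-normalize both sets of embeddings so the dot product is cosine similarity
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    # Scaled similarity logits: one row per image, one column per class
    logits = img @ txt.T / temperature
    return logits.argmax(axis=1)

# Toy example with 3 classes and 2 images (hypothetical features)
text_feats = np.eye(3)
image_feats = np.array([[0.9, 0.1, 0.0],
                        [0.0, 0.2, 0.8]])
labels = zero_shot_pseudo_labels(image_feats, text_feats)
```

In SFDA, such pseudo-labels can stand in for the missing target annotations; the paper's contribution lies in how the causal framework refines this supervision, which this sketch does not attempt to reproduce.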

📝 Abstract
In the pursuit of transferring a source model to a target domain without access to the source training data, Source-Free Domain Adaptation (SFDA) has been extensively explored across various scenarios, including closed-set, open-set, partial-set, and generalized settings. Existing methods, focusing on specific scenarios, not only address only a subset of challenges but also necessitate prior knowledge of the target domain, significantly limiting their practical utility and deployability. In light of these considerations, we introduce a more practical yet challenging problem, termed unified SFDA, which comprehensively incorporates all specific scenarios in a unified manner. To tackle this unified SFDA problem, we propose a novel approach called Latent Causal Factors Discovery (LCFD). In contrast to previous alternatives that emphasize learning the statistical description of reality, we formulate LCFD from a causality perspective. The objective is to uncover the causal relationships between latent variables and model decisions, enhancing the reliability and robustness of the learned model against domain shifts. To integrate extensive world knowledge, we leverage a pre-trained vision-language model such as CLIP. This aids in the formation and discovery of latent causal factors in the absence of supervision in the variation of distribution and semantics, coupled with a newly designed information bottleneck with theoretical guarantees. Extensive experiments demonstrate that LCFD can achieve new state-of-the-art results in distinct SFDA settings, as well as source-free out-of-distribution generalization. Our code and data are available at https://github.com/tntek/source-free-domain-adaptation.
Problem

Research questions and friction points this paper is trying to address.

Unifying diverse source-free domain adaptation scenarios
Discovering latent causal factors for model decisions
Enhancing model robustness against domain shifts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent causal factors discovery for domain adaptation
Leveraging pre-trained vision-language models like CLIP
Information bottleneck design with theoretical guarantees
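To make the information-bottleneck idea in the list above concrete: an IB-style objective trades a task term (fit the labels or pseudo-labels) against a compression term (keep the latent representation close to a simple prior), weighted by a coefficient β. The variational Gaussian-encoder form below is a standard textbook instance, not the paper's exact objective; all names and values are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ib_loss(mu, log_var, logits, pseudo_labels, beta=0.1):
    """Variational information-bottleneck-style objective:
    task cross-entropy + beta * compression (KL to a standard normal prior)."""
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims, averaged over batch
    kl = 0.5 * np.mean(np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1))
    # Cross-entropy against (pseudo-)labels
    probs = softmax(logits)
    ce = -np.mean(np.log(probs[np.arange(len(pseudo_labels)), pseudo_labels] + 1e-12))
    return ce + beta * kl

# Toy check: with mu = 0 and log_var = 0 the KL term vanishes,
# so the loss reduces to the cross-entropy alone.
mu = np.zeros((2, 4))
log_var = np.zeros((2, 4))
logits = np.array([[10.0, 0.0], [0.0, 10.0]])
labels = np.array([0, 1])
loss = ib_loss(mu, log_var, logits, labels, beta=0.5)
```

Tuning β controls how aggressively the latent code is compressed; the paper's theoretical guarantees concern the disentanglement this kind of objective induces, which the sketch does not attempt to prove.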
Song Tang
Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China; TAMS Group, Department of Informatics, Universität Hamburg, Hamburg, Germany
Wenxin Su
Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
Mao Ye
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
Jianwei Zhang
TAMS Group, Department of Informatics, Universität Hamburg, Hamburg, Germany
Xiatian Zhu
University of Surrey
Machine Learning · Computer Vision