🤖 AI Summary
Existing source-free domain adaptation (SFDA) methods are constrained to specific settings (e.g., closed-set, open-set, partial-set, or generalized SFDA) and rely on target-domain priors, limiting their applicability and theoretical grounding.
Method: This work introduces Unified SFDA, the first formal problem formulation of SFDA that requires neither source data nor target-domain prior knowledge. From a causal perspective, it models the generative relationship between latent variables and model decisions, proposing the Latent Causal Factors Discovery (LCFD) framework. LCFD integrates vision-language pre-trained models (e.g., CLIP) with a causally motivated information bottleneck objective to achieve theoretically guaranteed representation disentanglement.
Contribution/Results: Unified SFDA establishes a general, prior-free SFDA paradigm. It achieves state-of-the-art performance across all major SFDA benchmarks and significantly improves out-of-distribution generalization, demonstrating robustness to unseen domain shifts without access to source data or target annotations.
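The summary above does not spell out the objective, so the following is only an illustrative sketch of the general shape such a method can take, not the paper's actual LCFD loss: a variational information-bottleneck-style objective where a latent code is regularized toward a standard normal prior (the compression term) while being classified against frozen CLIP-style class text embeddings via cosine similarity (the task term). All function and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_to_standard_normal(mu, logvar):
    # Analytic KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def ib_style_loss(z_mu, z_logvar, text_embs, pseudo_label, beta=0.01):
    # Sample a latent code with the reparameterization trick.
    eps = rng.standard_normal(z_mu.shape)
    z = z_mu + np.exp(0.5 * z_logvar) * eps
    # Classify by cosine similarity to frozen class text embeddings,
    # in the style of a CLIP zero-shot head.
    z_n = z / np.linalg.norm(z)
    t_n = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = t_n @ z_n
    logp = logits - np.log(np.sum(np.exp(logits)))  # log-softmax
    task = -logp[pseudo_label]                      # cross-entropy vs. pseudo-label
    # beta trades off task fit against compression of the latent code.
    return task + beta * kl_to_standard_normal(z_mu, z_logvar)

# Toy usage: 4 classes, 8-dim latent, random "text" embeddings.
mu = rng.standard_normal(8) * 0.1
logvar = np.zeros(8)
texts = rng.standard_normal((4, 8))
loss = ib_style_loss(mu, logvar, texts, pseudo_label=2)
print(round(float(loss), 4))
```

In a source-free setting the pseudo-label would come from the model's own (or CLIP's zero-shot) predictions on unlabeled target data, since neither source data nor target annotations are available.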
📝 Abstract
In the pursuit of transferring a source model to a target domain without access to the source training data, Source-Free Domain Adaptation (SFDA) has been extensively explored across various scenarios, including closed-set, open-set, partial-set, and generalized settings. Existing methods, each focusing on a specific scenario, not only address only a subset of the challenges but also necessitate prior knowledge of the target domain, significantly limiting their practical utility and deployability. In light of these considerations, we introduce a more practical yet challenging problem, termed unified SFDA, which comprehensively incorporates all specific scenarios in a unified manner. To tackle this unified SFDA problem, we propose a novel approach called Latent Causal Factors Discovery (LCFD). In contrast to previous alternatives that emphasize learning a statistical description of reality, we formulate LCFD from a causality perspective. The objective is to uncover the causal relationships between latent variables and model decisions, enhancing the reliability and robustness of the learned model against domain shifts. To integrate extensive world knowledge, we leverage a pre-trained vision-language model such as CLIP. This aids the formation and discovery of latent causal factors in the absence of supervision over variations in distribution and semantics, coupled with a newly designed information bottleneck with theoretical guarantees. Extensive experiments demonstrate that LCFD can achieve new state-of-the-art results in distinct SFDA settings, as well as source-free out-of-distribution generalization. Our code and data are available at https://github.com/tntek/source-free-domain-adaptation.