🤖 AI Summary
This work addresses decentralized multi-source domain adaptation in heterogeneous environments without a central server: transferring knowledge from multiple labeled source domains to an unlabeled target domain while preserving data privacy, ensuring robustness, and supporting scalability. We propose the first fully decentralized federated dataset dictionary learning framework, which brings the Wasserstein barycenter into a decentralized setting to explicitly model cross-domain distribution shifts and couples distributed optimization with domain-invariant feature alignment. Experiments show that the approach significantly outperforms both federated and state-of-the-art decentralized baselines on multi-source domain adaptation tasks, achieving stable convergence with low communication overhead and no single point of failure.
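To give a concrete feel for the Wasserstein-barycenter idea behind the method, here is a toy sketch, not the paper's algorithm. It relies on the known fact that in one dimension the Wasserstein-2 barycenter of distributions is obtained by averaging their quantile functions; the function name, quantile grid, and example data are illustrative assumptions.

```python
import numpy as np

def wasserstein_barycenter_1d(samples, weights, n_quantiles=100):
    """Toy 1D Wasserstein-2 barycenter: in 1D, the barycenter's quantile
    function is the weighted average of the inputs' quantile functions."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    # One row of quantiles per input distribution, shape (k, n_quantiles).
    quantiles = np.stack([np.quantile(s, qs) for s in samples])
    # Weighted average of quantile functions -> barycenter quantiles.
    return weights @ quantiles

# Two "domains" with a distribution shift: Gaussians centered at 0 and 4.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 5000)
b = rng.normal(4.0, 1.0, 5000)
bary = wasserstein_barycenter_1d([a, b], np.array([0.5, 0.5]))
# The barycenter interpolates between the domains (mean close to 2.0).
```

In the paper's setting the barycenter is taken over clients' dataset representations rather than 1D samples, but the interpolation behavior illustrated here is the same intuition.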
📝 Abstract
Decentralized Multi-Source Domain Adaptation (DMSDA) is the challenging task of transferring knowledge from multiple related, heterogeneous source domains to an unlabeled target domain within a decentralized framework. Our work tackles DMSDA through a fully decentralized federated approach. In particular, we extend the Federated Dataset Dictionary Learning (FedDaDiL) framework by eliminating the need for a central server. FedDaDiL leverages Wasserstein barycenters to model the distributional shift across multiple clients, enabling effective adaptation while preserving data privacy. By decentralizing this framework, we enhance its robustness, scalability, and privacy, removing the risk of a single point of failure. We compare our method to its federated counterpart and other benchmark algorithms, showing that our approach effectively adapts source domains to an unlabeled target domain in a fully decentralized manner.
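A standard building block for replacing a central aggregation server is gossip (consensus) averaging over a peer-to-peer communication graph: each client repeatedly mixes its local quantity with its neighbors', and all clients converge to the global average. The sketch below is a generic illustration of this idea using Metropolis-Hastings mixing weights; it is an assumption for exposition, not the paper's actual aggregation protocol.

```python
import numpy as np

def metropolis_weights(adjacency):
    """Doubly-stochastic mixing matrix for an undirected graph
    (Metropolis-Hastings rule), so gossip converges to the exact mean."""
    deg = adjacency.sum(axis=1)
    n = len(deg)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
    # Each node keeps the leftover weight for itself.
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

def gossip_average(values, adjacency, n_rounds=100):
    """Each client repeatedly averages with its neighbors; every client
    converges to the global mean with no central server."""
    W = metropolis_weights(adjacency)
    x = np.asarray(values, dtype=float)
    for _ in range(n_rounds):
        x = W @ x  # one communication round with immediate neighbors
    return x

# Four clients on a ring, each holding a local scalar statistic.
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]])
x = gossip_average([1.0, 2.0, 3.0, 6.0], ring)
# All entries of x end up near the global mean, 3.0.
```

Each round costs only neighbor-to-neighbor messages, which is why such schemes avoid both the single point of failure and the communication bottleneck of a central server.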