One-Shot Federated Unsupervised Domain Adaptation with Scaled Entropy Attention and Multi-Source Smoothed Pseudo Labeling

📅 2025-03-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the challenging Federated Unsupervised Domain Adaptation (FUDA) setting in which clients hold only source-domain data, data sharing is prohibited, and only a single round of communication is allowed. The authors propose the first single-round FUDA framework. Key contributions: (1) Scaled Entropy Attention (SEA), a model-aggregation mechanism based on prediction entropy on the target domain that improves robustness; (2) Multi-Source Smoothed Pseudo Labeling (MSPL), which mitigates pseudo-label noise through cross-client consistency and label smoothing; and (3) a lightweight federated optimization combining a smoothed soft-label cross-entropy loss with efficient server-side aggregation. Extensive experiments on four standard benchmarks show significant improvements over state-of-the-art methods. The framework reduces communication overhead to 1/N of conventional federated learning and yields substantial computational efficiency gains. Code will be made publicly available.

📝 Abstract
Federated Learning (FL) is a promising approach for privacy-preserving collaborative learning. However, it faces significant challenges under domain shift, especially when each client has access only to its source data and cannot share it during target-domain adaptation. Moreover, FL methods often incur high communication overhead due to multiple rounds of model updates between clients and the server. We propose a one-shot Federated Unsupervised Domain Adaptation (FUDA) method to address these limitations. Specifically, we introduce Scaled Entropy Attention (SEA) for model aggregation and Multi-Source Smoothed Pseudo Labeling (MSPL) for target-domain adaptation. SEA uses scaled prediction entropy on the target domain to assign higher attention to reliable models. This improves global model quality and ensures balanced weighting of contributions. MSPL distills knowledge from multiple source models to generate pseudo labels and handles noisy labels using smoothed soft-label cross-entropy (SSCE). Our approach outperforms state-of-the-art methods across four standard benchmarks while reducing communication and computation costs, making it highly suitable for real-world applications. The implementation code will be made publicly available upon publication.
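The entropy-weighted aggregation described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the softmax-over-negative-entropy weighting, and the temperature parameter `tau` are assumptions made for the sketch; the paper's exact scaling of the entropy may differ.

```python
import numpy as np

def scaled_entropy_attention(probs_per_client, tau=1.0):
    """Sketch of SEA-style aggregation weights (hypothetical details).

    probs_per_client: list of (n_target, n_classes) softmax outputs,
    one array per client source model, computed on unlabeled target data.
    A model with lower mean prediction entropy on the target domain is
    treated as more reliable and receives a higher attention weight.
    """
    mean_entropies = []
    for p in probs_per_client:
        h = -np.sum(p * np.log(p + 1e-12), axis=1)  # per-sample entropy
        mean_entropies.append(h.mean())
    mean_entropies = np.array(mean_entropies)
    # Scale and invert: low entropy -> high attention (softmax of -H / tau)
    logits = -mean_entropies / tau
    logits -= logits.max()  # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def aggregate(client_params, attn):
    """Attention-weighted average of client model parameters
    (each client's parameters given as a dict of numpy arrays)."""
    agg = {k: np.zeros_like(v) for k, v in client_params[0].items()}
    for a, params in zip(attn, client_params):
        for k, v in params.items():
            agg[k] += a * v
    return agg
```

With one confident and one near-uniform model on the same target batch, the confident model receives the larger aggregation weight, and the weights sum to one.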
Problem

Research questions and friction points this paper is trying to address.

Addresses domain shifts in Federated Learning with privacy constraints.
Reduces communication overhead in Federated Learning model updates.
Improves global model quality using entropy-based attention and pseudo labeling.
Innovation

Methods, ideas, or system contributions that make the work stand out.

One-shot Federated Unsupervised Domain Adaptation
Scaled Entropy Attention for model aggregation
Multi-Source Smoothed Pseudo Labeling for adaptation
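The MSPL idea, generating soft pseudo labels from the ensemble of source models and training against them with a smoothed soft-label cross-entropy, can be sketched as below. Function names, the uniform-mixture form of smoothing, and the smoothing factor `eps` are assumptions for illustration; the paper's SSCE loss may be defined differently.

```python
import numpy as np

def multi_source_pseudo_labels(probs_per_client, attn=None):
    """Combine source-model predictions on target data into soft pseudo
    labels, optionally weighted by per-client attention (e.g. from SEA)."""
    stacked = np.stack(probs_per_client)              # (n_clients, n, c)
    if attn is None:
        attn = np.full(len(probs_per_client), 1.0 / len(probs_per_client))
    return np.tensordot(attn, stacked, axes=1)        # (n, c)

def smoothed_soft_label_ce(pred_probs, pseudo_probs, eps=0.1):
    """Hypothetical smoothed soft-label cross-entropy (SSCE) sketch:
    mix the soft pseudo label with a uniform prior to dampen the effect
    of noisy pseudo labels, then take cross-entropy against it."""
    n_classes = pseudo_probs.shape[1]
    smoothed = (1.0 - eps) * pseudo_probs + eps / n_classes
    return -np.mean(np.sum(smoothed * np.log(pred_probs + 1e-12), axis=1))
```

Averaging the source models' softmax outputs keeps the pseudo labels soft, so disagreement between clients is preserved as uncertainty rather than forced into a hard (and possibly wrong) class.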
Ali Abedi
University of Windsor
Q. M. Jonathan Wu
University of Windsor
Ning Zhang
University of Windsor
Farhad Pourpanah
SMIEEE, Queen's University
Machine Learning · Computational Intelligence · Artificial Intelligence