Navigating Heterogeneity and Privacy in One-Shot Federated Learning with Diffusion Models

📅 2024-05-02
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
To address the dual challenges of data heterogeneity and differential privacy (DP) preservation in one-shot federated learning (FL), this paper proposes FedDiff, a framework that introduces diffusion models to one-shot FL for the first time. FedDiff generates high-fidelity synthetic data to align heterogeneous client data distributions, thereby mitigating the non-IID problem. It further incorporates a Fourier Magnitude Filtering (FMF) mechanism that markedly improves the fidelity of synthetic samples generated under DP noise. Experiments show that FedDiff outperforms state-of-the-art one-shot FL methods even under stringent DP constraints, with substantial gains in model accuracy. Crucially, FedDiff requires only a single round of communication, cutting communication overhead to 1/N of a conventional N-round FL baseline. It thereby combines strong privacy guarantees, high communication efficiency, and competitive model performance.
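As a rough illustration of this one-shot workflow, the sketch below has each client fit a generative model to its label-skewed local data, upload it in a single round, and the server draw a pooled synthetic dataset from the uploaded generators to train the global model. The per-class Gaussian generator and nearest-centroid classifier are toy stand-ins for the diffusion model and the global model, and the DP training step is omitted; only the communication pattern mirrors the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class GaussianGenerator:
    """Stand-in for a client's locally trained diffusion model.

    It fits a per-class Gaussian to the client's data; in FedDiff this would be
    a diffusion model trained locally (optionally with DP-SGD) and uploaded once.
    """
    def fit(self, x: np.ndarray, y: np.ndarray) -> "GaussianGenerator":
        self.stats = {c: (x[y == c].mean(axis=0), x[y == c].std(axis=0) + 1e-6)
                      for c in np.unique(y)}
        return self

    def sample(self, n_per_class: int) -> tuple[np.ndarray, np.ndarray]:
        xs, ys = [], []
        for c, (mu, sigma) in self.stats.items():
            xs.append(rng.normal(mu, sigma, size=(n_per_class, mu.size)))
            ys.append(np.full(n_per_class, c))
        return np.concatenate(xs), np.concatenate(ys)

def make_client_data(classes, n=100):
    """Label-skewed (non-IID) toy dataset for one client."""
    y = rng.choice(classes, size=n)
    x = rng.normal(loc=y[:, None].astype(float), scale=0.5, size=(n, 8))
    return x, y

# Heterogeneous clients: each holds only a subset of the classes.
clients = [make_client_data([0, 1]), make_client_data([1, 2]), make_client_data([2, 3])]

# The single communication round: every client uploads one trained generator.
uploaded = [GaussianGenerator().fit(x, y) for x, y in clients]

# Server side: pool synthetic data from all generators to bridge the non-IID gap.
synthetic = [g.sample(n_per_class=200) for g in uploaded]
x_syn = np.concatenate([x for x, _ in synthetic])
y_syn = np.concatenate([y for _, y in synthetic])

# Nearest-centroid classifier as a tiny stand-in for the global model.
centroids = {c: x_syn[y_syn == c].mean(axis=0) for c in np.unique(y_syn)}

def predict(x):
    labels = np.array(list(centroids))
    dists = np.stack([np.linalg.norm(x - centroids[c], axis=1) for c in labels], axis=1)
    return labels[dists.argmin(axis=1)]

x_test, y_test = make_client_data([0, 1, 2, 3], n=400)
print("global accuracy:", (predict(x_test) == y_test).mean())
```

In the paper the uploaded artifact is the diffusion model itself, so the server pays a one-time sampling cost but no further communication rounds are needed.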

📝 Abstract
Federated learning (FL) enables multiple clients to train models collectively while preserving data privacy. However, FL faces challenges in terms of communication cost and data heterogeneity. One-shot federated learning has emerged as a solution by reducing communication rounds, improving efficiency, and providing better security against eavesdropping attacks. Nevertheless, data heterogeneity remains a significant challenge, impacting performance. This work explores the effectiveness of diffusion models in one-shot FL, demonstrating their applicability in addressing data heterogeneity and improving FL performance. Additionally, we investigate the utility of our diffusion model approach, FedDiff, compared to other one-shot FL methods under differential privacy (DP). Furthermore, to improve generated sample quality under DP settings, we propose a pragmatic Fourier Magnitude Filtering (FMF) method, enhancing the effectiveness of generated data for global model training.
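The abstract does not spell out how FMF operates; one plausible reading, sketched below, is a per-sample post-process that attenuates the high-frequency part of each synthetic image's Fourier magnitude spectrum (where DP-induced noise artifacts tend to concentrate) while leaving the phase untouched. The low-pass form, the cutoff radius, and the attenuation curve are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def fourier_magnitude_filter(img: np.ndarray, cutoff: float = 0.25) -> np.ndarray:
    """Attenuate high-frequency magnitude components of a single-channel image.

    `cutoff` is the radius (as a fraction of the spectrum's half-width) below
    which magnitudes are kept unchanged; higher frequencies are damped. The
    phase spectrum is preserved. Both the low-pass form and the cutoff value
    are illustrative assumptions rather than the paper's exact FMF rule.
    """
    h, w = img.shape
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    magnitude, phase = np.abs(spectrum), np.angle(spectrum)

    # Radial frequency grid, normalized so the spectrum centre is 0 and the edges ~1.
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.sqrt(((yy - h / 2) / (h / 2)) ** 2 + ((xx - w / 2) / (w / 2)) ** 2)

    # Keep low frequencies, smoothly attenuate magnitudes beyond the cutoff.
    attenuation = np.where(radius <= cutoff, 1.0, np.exp(-(radius - cutoff) * 8.0))
    filtered = (magnitude * attenuation) * np.exp(1j * phase)

    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))

# Example: clean a synthetic sample corrupted by high-frequency noise.
rng = np.random.default_rng(0)
sample = np.outer(np.hanning(32), np.hanning(32)) + 0.1 * rng.normal(size=(32, 32))
print(fourier_magnitude_filter(sample).shape)  # (32, 32)
```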
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Data Heterogeneity
Privacy Protection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Models
Fourier Magnitude Filtering (FMF)
Federated Learning under Data Heterogeneity
👥 Authors
Matías Mendieta
Center for Research in Computer Vision, University of Central Florida, USA
Guangyu Sun
School of Integrated Circuits, Peking University
Computer Architecture · Design Automation · Emerging Memory
Chen Chen
Center for Research in Computer Vision, University of Central Florida, USA