🤖 AI Summary
To address the dual challenges of data heterogeneity and differential privacy (DP) preservation in single-round federated learning (FL), this paper proposes FedDiff, a novel framework that introduces diffusion models to single-round FL for the first time. FedDiff generates high-fidelity synthetic data to align heterogeneous client data distributions, thereby mitigating the non-IID problem. Furthermore, it incorporates a Fourier Magnitude Filtering (FMF) mechanism that significantly enhances the fidelity of synthetic samples under DP noise. Experimental results demonstrate that FedDiff outperforms state-of-the-art single-round FL methods in model accuracy, even under stringent DP constraints. Crucially, FedDiff requires only one round of communication, reducing communication overhead to 1/N of that of a conventional N-round FL scheme. It thus simultaneously achieves strong privacy guarantees, exceptional communication efficiency, and competitive model performance.
📝 Abstract
Federated learning (FL) enables multiple clients to train models collectively while preserving data privacy. However, FL faces challenges in communication cost and data heterogeneity. One-shot federated learning has emerged as a solution that reduces training to a single communication round, improving efficiency and providing better security against eavesdropping attacks. Nevertheless, data heterogeneity remains a significant challenge that degrades performance. This work explores the effectiveness of diffusion models in one-shot FL, demonstrating their applicability in addressing data heterogeneity and improving FL performance. Additionally, we investigate the utility of our diffusion model approach, FedDiff, compared to other one-shot FL methods under differential privacy (DP). Furthermore, to improve generated sample quality under DP settings, we propose a pragmatic Fourier Magnitude Filtering (FMF) method, enhancing the effectiveness of generated data for global model training.
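The abstract does not spell out how FMF operates, but the name suggests filtering generated samples in the Fourier magnitude domain, where DP noise tends to concentrate in high-frequency components. The sketch below is an illustrative assumption of such a mechanism, not the paper's exact design: it preserves the phase spectrum, suppresses high-frequency magnitudes with a simple low-pass mask (the `keep_ratio` parameter and circular mask shape are our own choices), and reconstructs the sample.

```python
import numpy as np

def fourier_magnitude_filter(img, keep_ratio=0.5):
    """Hypothetical sketch of Fourier Magnitude Filtering (FMF).

    Suppresses high-frequency magnitude components of a (possibly
    DP-noised) generated image while preserving phase, then
    reconstructs the image. The low-pass mask and keep_ratio are
    illustrative assumptions, not the paper's actual method.
    """
    # 2-D FFT, shifted so low frequencies sit at the center.
    spec = np.fft.fftshift(np.fft.fft2(img))
    mag, phase = np.abs(spec), np.angle(spec)

    # Circular low-pass mask applied to the magnitude spectrum only.
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    cy, cx = h / 2.0, w / 2.0
    radius = keep_ratio * min(h, w) / 2.0
    mask = ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2

    # Recombine filtered magnitude with the original phase.
    filtered = np.where(mask, mag, 0.0) * np.exp(1j * phase)
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))

# Example: filter a synthetic sample perturbed by Gaussian (DP-style) noise.
rng = np.random.default_rng(0)
noisy_sample = rng.normal(size=(32, 32))
cleaned = fourier_magnitude_filter(noisy_sample, keep_ratio=0.5)
```

Because only magnitudes are attenuated while phase is kept, coarse image structure (largely carried by phase and low-frequency magnitude) survives, which matches the stated goal of making DP-noised synthetic data more useful for global model training.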