pFedGPA: Diffusion-based Generative Parameter Aggregation for Personalized Federated Learning

📅 2024-09-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the performance degradation of linear parameter aggregation (e.g., FedAvg) in federated learning under heterogeneous data, this paper proposes the first generative personalized parameter aggregation framework based on diffusion models. The method maps client models into a latent space via parameter inversion encoding and performs nonlinear, distribution-aware parameter fusion at the server via denoising sampling—effectively decoupling the complexities of global and personalized parameter distributions. Unlike conventional linear aggregation, our framework is the first to incorporate diffusion models into federated parameter aggregation, enabling generative personalized modeling. Extensive experiments across multiple benchmark datasets demonstrate that the proposed approach significantly outperforms baselines including FedAvg and pFedMe, achieving average accuracy improvements of 3.2–5.8 percentage points for personalized models.

Technology Category

Application Category

📝 Abstract
Federated Learning (FL) offers a decentralized approach to model training, where data remains local and only model parameters are shared between the clients and the central server. Traditional methods, such as Federated Averaging (FedAvg), linearly aggregate these parameters which are usually trained on heterogeneous data distributions, potentially overlooking the complex, high-dimensional nature of the parameter space. This can result in degraded performance of the aggregated model. While personalized FL approaches can mitigate the heterogeneous data issue to some extent, the limitation of linear aggregation remains unresolved. To alleviate this issue, we investigate the generative approach of diffusion model and propose a novel generative parameter aggregation framework for personalized FL, exttt{pFedGPA}. In this framework, we deploy a diffusion model on the server to integrate the diverse parameter distributions and propose a parameter inversion method to efficiently generate a set of personalized parameters for each client. This inversion method transforms the uploaded parameters into a latent code, which is then aggregated through denoising sampling to produce the final personalized parameters. By encoding the dependence of a client's model parameters on the specific data distribution using the high-capacity diffusion model, exttt{pFedGPA} can effectively decouple the complexity of the overall distribution of all clients' model parameters from the complexity of each individual client's parameter distribution. Our experimental results consistently demonstrate the superior performance of the proposed method across multiple datasets, surpassing baseline approaches.
Problem

Research questions and friction points this paper is trying to address.

Addresses parameter aggregation in federated learning
Proposes diffusion-based generative aggregation framework
Enhances personalized federated learning performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion-based parameter aggregation
Generative personalized FL framework
Parameter inversion for latent code
🔎 Similar Papers
No similar papers found.
J
Jiahao Lai
Tsinghua-Berkeley Shenzhen Institute, Tsinghua University
J
Jiaqi Li
Shenzhen International Graduate School, Tsinghua University
J
Jian Xu
Tsinghua-Berkeley Shenzhen Institute, Tsinghua University
Y
Yanru Wu
Tsinghua-Berkeley Shenzhen Institute, Tsinghua University
B
Boshi Tang
Shenzhen International Graduate School, Tsinghua University
S
Siqi Chen
Tsinghua-Berkeley Shenzhen Institute, Tsinghua University
Yongfeng Huang
Yongfeng Huang
Phd Student, Chinese University of Hong Kong
Natural Language Processing
Wenbo Ding
Wenbo Ding
UNIVERSITY AT BUFFALO
securityMachine Learning
Y
Yang Li
Tsinghua-Berkeley Shenzhen Institute, Tsinghua University; Shenzhen International Graduate School, Tsinghua University