FedPPA: Progressive Parameter Alignment for Personalized Federated Learning

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing personalized modeling in federated learning (FL) under concurrent client heterogeneity in computational capability and non-IID data distributions, this paper proposes Progressive Parameter Alignment (FedPPA). FedPPA mitigates local-global model divergence by progressively aligning the weights of layers common across clients with the global model's weights, while preserving client-specific knowledge through an entropy-weighted aggregation mechanism based on local prediction uncertainty. The authors position FedPPA as the first personalized FL method to explicitly co-model both model heterogeneity, induced by computational disparities, and data heterogeneity (non-IID), thereby achieving robust personalization under this dual heterogeneity. Extensive experiments on MNIST, FMNIST, and CIFAR-10 demonstrate that FedPPA consistently outperforms state-of-the-art baselines in both personalized accuracy and convergence stability, yielding average improvements of 1.8–3.2 percentage points.
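
The entropy-weighted aggregation can be pictured with a minimal sketch like the one below. Everything here is an assumption drawn from the summary: the function names, the softmax-over-negative-entropy weighting, and the choice that low-entropy (confident) clients receive larger weights are illustrative, not the authors' implementation.

```python
import numpy as np

def mean_prediction_entropy(probs, eps=1e-12):
    """Average Shannon entropy of a batch of predicted class distributions."""
    p = np.clip(probs, eps, 1.0)
    return float(-(p * np.log(p)).sum(axis=-1).mean())

def entropy_weighted_average(client_params, client_probs):
    """Average flat client parameter vectors, giving more weight to clients
    whose local predictions are more confident (lower mean entropy).
    The softmax form and weighting direction are assumptions."""
    ents = np.array([mean_prediction_entropy(p) for p in client_probs])
    logits = -ents                       # low entropy -> large logit
    w = np.exp(logits - logits.max())    # numerically stable softmax
    w /= w.sum()
    return np.tensordot(w, np.stack(client_params), axes=1)
```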

📝 Abstract
Federated Learning (FL) is a decentralized, privacy-preserving machine learning paradigm that enables multiple clients to collaboratively train a model without sharing their data. In real-world scenarios, however, clients often have heterogeneous computational resources and hold non-independent and identically distributed (non-IID) data, which poses significant challenges during training. Personalized Federated Learning (PFL) has emerged to address these issues by customizing models for each client based on its unique data distribution. Despite its potential, existing PFL approaches typically overlook the coexistence of model and data heterogeneity arising from clients with diverse computational capabilities. To overcome this limitation, we propose a novel method, Progressive Parameter Alignment (FedPPA), which progressively aligns the weights of common layers across clients with the global model's weights. Our approach not only mitigates inconsistencies between global and local models during client updates, but also preserves each client's local knowledge, thereby enhancing personalization robustness in non-IID settings. To further improve global model performance while retaining strong personalization, we also integrate entropy-based weighted averaging into the FedPPA framework. Experiments on three image classification datasets, MNIST, FMNIST, and CIFAR-10, demonstrate that FedPPA consistently outperforms existing FL algorithms, achieving superior performance in personalized adaptation.
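
To make "progressively aligns the weights of common layers" concrete, here is a minimal sketch under stated assumptions: models are dicts of numpy arrays, `common_keys` names the layers shared by all client architectures, and the linear schedule for the alignment coefficient is a guess; the paper may stage it differently.

```python
import numpy as np

def progressive_align(local_state, global_state, round_idx, total_rounds,
                      common_keys):
    """Pull the common layers of a local model toward the global weights,
    with alignment strength growing over communication rounds. Personalized
    (non-common) layers are left untouched, preserving local knowledge."""
    alpha = min(1.0, (round_idx + 1) / total_rounds)  # assumed linear schedule
    aligned = {k: v.copy() for k, v in local_state.items()}
    for k in common_keys:
        aligned[k] = (1.0 - alpha) * local_state[k] + alpha * global_state[k]
    return aligned
```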
Problem

Research questions and friction points this paper is trying to address.

Addressing model and data heterogeneity in federated learning systems
Aligning local and global model weights for improved consistency
Enhancing personalization robustness in non-IID data settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive alignment of common layers with the global model
Preservation of local knowledge for robust personalization
Entropy-based weighted averaging to strengthen the global model (the two mechanisms are combined in the sketch after this list)
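
Read together, these bullets suggest a training loop along the following lines. The toy sketch below wires the two mechanisms into one server round, with flat parameter vectors standing in for common layers; the schedule, the weighting rule, and all names are illustrative assumptions, and local training is elided.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_entropy(probs, eps=1e-12):
    """Average Shannon entropy over a batch of class distributions."""
    p = np.clip(probs, eps, 1.0)
    return float(-(p * np.log(p)).sum(axis=-1).mean())

# Toy setup: 3 clients, a 5-parameter "common layer", 10-class predictions.
global_w = np.zeros(5)
client_ws = [rng.normal(size=5) for _ in range(3)]
client_probs = [rng.dirichlet(np.ones(10), size=32) for _ in range(3)]

total_rounds = 5
for rnd in range(total_rounds):
    alpha = (rnd + 1) / total_rounds            # assumed alignment schedule
    # Clients: progressively align common weights toward the global model.
    client_ws = [(1 - alpha) * w + alpha * global_w for w in client_ws]
    # ... local training would update client_ws and client_probs here ...
    # Server: entropy-weighted averaging (confident clients count more).
    ents = np.array([mean_entropy(p) for p in client_probs])
    wts = np.exp(-ents - (-ents).max())         # stable softmax of -entropy
    wts /= wts.sum()
    global_w = np.tensordot(wts, np.stack(client_ws), axes=1)
```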
Maulidi Adi Prasetia
Universitas Gadjah Mada, Indonesia
Muhamad Risqi U. Saputra
Monash University, Indonesia
Guntur Dharma Putra
Assistant Professor at Universitas Gadjah Mada
Distributed Systems · IoT · Blockchain · Security and Privacy · Applied Machine Learning