🤖 AI Summary
To address performance degradation in federated learning caused by mixed heterogeneity—simultaneous feature and label distribution shifts across clients—this paper proposes Personalized Federated Prototype Learning (PFPL). The method’s core contributions are: (i) the first construction of personalized, unbiased class prototypes per client under mixed heterogeneity, augmented with domain knowledge; and (ii) a consistency regularization mechanism that aligns local instance representations with prototypes to enhance cross-client semantic consistency. PFPL integrates prototype-based learning, consistency constraints, and distributed optimization, achieving faster convergence and improved generalization while maintaining low communication overhead. Experiments on Digits and Office-Caltech benchmark datasets demonstrate that PFPL significantly outperforms state-of-the-art baselines: it reduces average communication rounds by 32% and improves classification accuracy by 2.8–5.4 percentage points.
📝 Abstract
Federated learning has received significant attention for its ability to protect client privacy while leveraging distributed data from multiple devices for model training. However, conventional approaches typically address only a single type of heterogeneity in isolation, such as skewed feature distributions or skewed label distributions, even though data heterogeneity is a key factor limiting model performance. To address this issue, we propose a new approach, PFPL, for mixed heterogeneous scenarios. The method provides richer domain knowledge and unbiased convergence targets by constructing personalized, unbiased prototypes for each client. Moreover, in the local update phase, we introduce a consistency regularization term that aligns local instance representations with their personalized prototypes, which significantly accelerates convergence of the loss function. Experimental results on the Digits and Office-Caltech datasets validate the effectiveness of our approach and demonstrate a reduction in communication cost.
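To make the two core ideas concrete, the sketch below shows one common way prototype-based consistency regularization is implemented in federated settings: each client computes a per-class prototype as the mean of its local feature embeddings, and a consistency loss penalizes the distance between each instance's embedding and its class prototype. This is a minimal illustration under assumed conventions, not the paper's actual PFPL algorithm (the paper's personalized, unbiased prototype construction and its aggregation across clients are not specified here); the function names and the squared-distance loss are hypothetical choices.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Local prototype per class: the mean embedding of that class's instances.
    (Assumes every class in range(num_classes) appears at least once locally.)"""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def consistency_loss(features, labels, prototypes):
    """Hypothetical consistency term: mean squared Euclidean distance
    between each instance embedding and its class prototype."""
    diffs = features - prototypes[labels]          # per-instance residuals
    return float((diffs ** 2).sum(axis=1).mean())  # average squared distance

# Tiny usage example with 2-D embeddings and two classes.
features = np.array([[0.0, 0.0], [2.0, 2.0], [4.0, 0.0]])
labels = np.array([0, 0, 1])
protos = class_prototypes(features, labels, num_classes=2)
loss = consistency_loss(features, labels, protos)
```

In a full federated pipeline, a term like `consistency_loss` would be added to each client's local objective during the update phase, pulling local representations toward the (personalized) prototypes and thereby encouraging cross-client semantic consistency.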