🤖 AI Summary
Personalized federated learning (pFL) lacks accessible, standardized benchmarking platforms, hindering reproducibility and adoption—especially for newcomers. Method: This work introduces the first open-source pFL library and standardized evaluation framework designed for beginners. It systematically integrates 37 state-of-the-art algorithms—including 8 classical federated learning (FL) and 29 pFL methods—and defines three types of statistical heterogeneity scenarios across 24 diverse datasets. Built on PyTorch, the framework features a modular architecture supporting algorithm plug-ins, customizable data partitioning, and multidimensional evaluation (e.g., accuracy, convergence, fairness). Contribution/Results: The library significantly advances standardization and reproducibility in pFL research while lowering entry and experimental barriers. It has garnered over 1,600 GitHub stars and 300+ forks, establishing itself as a mainstream benchmark tool for both pedagogy and research in the field.
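The "statistical heterogeneity scenarios" mentioned above are typically produced by label-skewed data partitioning, most commonly with a Dirichlet prior. The sketch below illustrates that idea in plain NumPy; it is a generic illustration of Dirichlet label partitioning, not PFLlib's actual partitioning code, and the function name `dirichlet_partition` is ours.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients with label skew drawn from a
    Dirichlet(alpha) prior: smaller alpha -> more heterogeneous clients."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.flatnonzero(labels == cls))
        # Draw this class's share for each client, then cut the shuffled
        # index list at the corresponding boundaries.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(cls_idx)).astype(int)
        for client, part in zip(client_indices, np.split(cls_idx, cuts)):
            client.extend(part.tolist())
    return client_indices
```

With a small `alpha` (e.g. 0.1), most clients end up dominated by a few classes; with a large `alpha` (e.g. 100), the split approaches IID.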
📝 Abstract
Amid the ongoing advancements in Federated Learning (FL), a machine learning paradigm that allows collaborative learning with data privacy protection, personalized FL (pFL) has gained significant prominence as a research direction within the FL domain. Whereas traditional FL (tFL) focuses on jointly learning a single global model, pFL aims to balance each client's global and personalized goals in FL settings. To foster the pFL research community, we built PFLlib, a comprehensive pFL library with an integrated benchmark platform. In PFLlib, we implemented 37 state-of-the-art FL algorithms (8 tFL algorithms and 29 pFL algorithms) and provided various evaluation environments covering three statistically heterogeneous scenarios and 24 datasets. At present, PFLlib has gained more than 1,600 stars and 300 forks on GitHub.
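The "balance each client's global and personalized goals" idea is often formalized as a regularized local objective: each client keeps a personalized model that minimizes its local loss while being pulled toward the shared global model (the formulation used by methods such as Ditto, one family of pFL algorithms). A minimal illustrative sketch of one such update step, with assumed names and not taken from PFLlib's code:

```python
import numpy as np

def personalized_step(v, w_global, grad_local, lam, lr):
    """One gradient step on the regularized personalized objective
    f_i(v) + (lam / 2) * ||v - w_global||^2:
    follow the local-loss gradient while staying close to the global model."""
    return v - lr * (grad_local + lam * (v - w_global))
```

Here `lam` controls the personalization trade-off: `lam = 0` trains a purely local model, while a large `lam` keeps the personalized model close to the global one.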