DFPL: Decentralized Federated Prototype Learning Across Heterogeneous Data Distributions

📅 2025-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the performance degradation caused by statistical heterogeneity in decentralized federated learning, this paper proposes the Decentralized Federated Prototype Learning (DFPL) framework. DFPL is the first framework to integrate prototype-based learning into a serverless distributed training paradigm: it mitigates data distribution shift through local prototype modeling and jointly schedules communication, computation, and consensus resources via a co-designed training-and-blockchain-mining mechanism. The authors provide theoretical convergence guarantees together with a resource-aware distributed optimization analysis. Experiments on three heterogeneous benchmark datasets show that DFPL improves test accuracy by 4.2% on average, reduces communication overhead by 67%, and accelerates convergence by 2.1×, enhancing both model robustness and system efficiency.
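
The core idea of local prototype modeling can be illustrated with a short sketch. The following is a minimal, hypothetical Python example (not the authors' released code) of how a client might compute per-class prototypes as mean feature embeddings; `encoder`, `dataloader`, and the tensor shapes are assumptions for illustration.

```python
import torch

def compute_local_prototypes(encoder, dataloader, num_classes, feat_dim):
    """Return a (num_classes, feat_dim) tensor of per-class mean embeddings."""
    sums = torch.zeros(num_classes, feat_dim)
    counts = torch.zeros(num_classes)
    encoder.eval()
    with torch.no_grad():
        for x, y in dataloader:
            feats = encoder(x)              # (batch, feat_dim) embeddings
            for c in y.unique():
                mask = (y == c)
                sums[c] += feats[mask].sum(dim=0)
                counts[c] += mask.sum()
    counts = counts.clamp(min=1)            # classes absent locally keep zero prototypes
    return sums / counts.unsqueeze(1)
```

Because only these prototype vectors are exchanged, each message scales with `num_classes × feat_dim` rather than with the full model size, which is the source of the communication savings the summary reports.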

📝 Abstract
Federated learning is a distributed machine learning paradigm that enables multiple clients to train collaboratively through centralized model aggregation. However, standard federated learning relies on a central server, making it vulnerable to server failures. While existing solutions use blockchain technology to implement Decentralized Federated Learning (DFL), the statistical heterogeneity of data distributions among clients severely degrades DFL's performance. Motivated by this issue, this paper proposes a decentralized federated prototype learning framework, named DFPL, which significantly improves the performance of distributed machine learning across heterogeneous data distributions. Specifically, our framework introduces prototype learning into DFL to address statistical heterogeneity, which greatly reduces the number of parameters exchanged between clients. Additionally, blockchain is embedded into our framework, enabling the training and mining processes to be implemented at each client. From a theoretical perspective, we provide a convergence guarantee for DFPL by jointly analyzing resource allocation for training and mining. The experiments highlight the superiority of our DFPL framework in communication efficiency and test performance on three benchmark datasets with heterogeneous data distributions.
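
As a concrete illustration of how exchanged prototypes can regularize local training, here is a minimal sketch of a prototype-regularized local objective in the spirit of prototype-based federated learning; the function name, the MSE regularizer, and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import torch.nn.functional as F

def local_loss(logits, feats, labels, global_protos, lam=1.0):
    """Cross-entropy plus a pull toward the aggregated prototype of each
    sample's class; `lam` trades off the two terms (illustrative value)."""
    ce = F.cross_entropy(logits, labels)
    proto_reg = F.mse_loss(feats, global_protos[labels].detach())
    return ce + lam * proto_reg
```

The regularizer keeps each client's feature space anchored to prototypes aggregated from its peers, which is one common way to counteract drift under non-IID data.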
Problem

Research questions and friction points this paper is trying to address.

Decentralized federated learning degrades under statistical data heterogeneity
Reducing parameter exchange in heterogeneous client distributions
Integrating blockchain for client-based training and mining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized federated prototype learning framework
Prototype learning reduces exchanged parameters
Blockchain enables training and mining at each client (see the sketch below)
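
A rough sketch, under stated assumptions, of how training and mining might coexist at each client: after a local update, the client mines a block over its prototype payload with a simple hash-based proof-of-work before broadcasting it to peers. The difficulty scheme and payload format are placeholders, not the paper's actual consensus protocol.

```python
import hashlib
import json

def mine_block(prev_hash: str, payload: dict, difficulty: int = 4) -> dict:
    """Find a nonce so that SHA-256 over (prev_hash, payload, nonce) has
    `difficulty` leading zeros; payload must be JSON-serializable,
    e.g. prototype tensors converted to nested lists."""
    target = "0" * difficulty
    nonce = 0
    while True:
        body = json.dumps(
            {"prev": prev_hash, "payload": payload, "nonce": nonce},
            sort_keys=True,
        )
        digest = hashlib.sha256(body.encode()).hexdigest()
        if digest.startswith(target):
            return {"hash": digest, "nonce": nonce,
                    "prev": prev_hash, "payload": payload}
        nonce += 1
```

In such a scheme, the time a client spends mining competes with the time it can spend training, which is why the paper's convergence analysis accounts for the allocation of resources between the two processes.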
Hongliang Zhang
School of Computer Science and Technology, Qilu University of Technology, Jinan, 250353, Shandong, China
Fenghua Xu
Cyber Security Institute, University of Science and Technology of China, Hefei, 230026, Anhui, China
Zhongyuan Yu
School of Information Science and Engineering, Lanzhou University, Lanzhou, 730000, Gansu, China
Chunqiang Hu
Professor of Big Data & Software Engineering, Chongqing University.
Data-Driven Security and Privacy, Algorithm Design and Analysis
Shanchen Pang
China University of Petroleum
AI, Petri Net, Cloud Computing, Edge Computing
Xiaofen Wang
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, Sichuan, China
Jiguo Yu
School of Computer Science and Technology, Qilu University of Technology, Jinan, 250353, Shandong, China; School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, Sichuan, China