Sparse Decentralized Federated Learning

📅 2023-08-31
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the high communication overhead, training instability, and weak privacy guarantees of decentralized federated learning (DFL), this paper proposes a sparsity-aware modeling framework with partial neighbor selection, introducing SDFL—the first decentralized learning framework under explicit sparsity constraints. Methodologically, it integrates three key components: (1) the CEPS optimizer, which guarantees convergence and satisfies ε-differential privacy; (2) one-bit compressive sensing for ultra-low-bandwidth communication; and (3) graph-topology-aware distributed stochastic gradient descent coupled with sparse optimization. Theoretically, SDFL is proven to converge under non-convex settings while strictly adhering to a pre-specified privacy budget. Empirically, on multiple benchmark datasets, SDFL significantly reduces both communication and computational costs—by up to an order of magnitude—while preserving model accuracy and training robustness.
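The one-bit compressive-sensing idea in component (2) can be sketched as follows: a node keeps only the largest-magnitude entries of its model (the sparsity constraint), projects it through a random matrix, and transmits only the signs of the projections. The function names, the hard-thresholding decoder, and all dimensions below are illustrative assumptions, not the paper's actual CEPS implementation:

```python
import numpy as np

def hard_threshold(w, s):
    """Keep only the s largest-magnitude entries of w (sparsity constraint)."""
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-s:]
    out[idx] = w[idx]
    return out

def one_bit_encode(w, A):
    """One-bit compressive sensing: transmit only the signs of m random projections."""
    return np.sign(A @ w)  # m bits instead of d floats

def one_bit_decode(bits, A, s):
    """Toy decoder: back-project the sign bits and re-impose s-sparsity."""
    est = hard_threshold(A.T @ bits, s)
    n = np.linalg.norm(est)
    return est / n if n > 0 else est

rng = np.random.default_rng(0)
d, m, s = 100, 60, 5
w = hard_threshold(rng.standard_normal(d), s)
w /= np.linalg.norm(w)                # one-bit CS recovers direction only
A = rng.standard_normal((m, d)) / np.sqrt(m)
w_hat = one_bit_decode(one_bit_encode(w, A), A, s)
```

Each round thus costs m bits per selected neighbor rather than d floats, which is where the claimed order-of-magnitude communication savings come from; since sign measurements discard magnitude, the decoder can only recover the direction of the sparse model.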
📝 Abstract
Decentralized Federated Learning (DFL) enables collaborative model training without a central server but faces challenges in efficiency, stability, and trustworthiness due to communication and computational limitations among distributed nodes. To address these critical issues, we introduce a sparsity constraint on the shared model, leading to Sparse DFL (SDFL), and propose a novel algorithm, CEPS. The sparsity constraint facilitates the use of one-bit compressive sensing to transmit one-bit information between partially selected neighbour nodes at specific steps, thereby significantly improving communication efficiency. Moreover, we integrate differential privacy into the algorithm to ensure privacy preservation and bolster the trustworthiness of the learning process. Furthermore, CEPS is underpinned by theoretical guarantees regarding both convergence and privacy. Numerical experiments validate the effectiveness of the proposed algorithm in improving communication and computation efficiency while maintaining a high level of trustworthiness.
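The differential-privacy integration mentioned in the abstract typically amounts to bounding the sensitivity of each shared update and adding calibrated noise before transmission. A minimal sketch using the Laplace mechanism for ε-DP is below; the clipping rule, noise calibration, and function name are assumptions for illustration, not the paper's exact CEPS noise schedule:

```python
import numpy as np

def dp_sanitize(grad, clip=1.0, eps=0.5, rng=None):
    """Clip the local update to bound its L1 sensitivity, then add Laplace
    noise with scale (sensitivity / eps) for eps-differential privacy.
    (Illustrative Laplace mechanism, not the paper's exact construction.)"""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad, ord=1)
    if norm > clip:
        grad = grad * (clip / norm)   # L1 sensitivity now at most 2 * clip
    noise = rng.laplace(scale=2 * clip / eps, size=grad.shape)
    return grad + noise

# With a huge eps the noise is negligible and only clipping remains
g = dp_sanitize(np.full(4, 10.0), clip=1.0, eps=1e9,
                rng=np.random.default_rng(1))
```

Smaller ε means stronger privacy but noisier updates, which is the efficiency/trustworthiness trade-off the paper's convergence analysis has to account for.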
Problem

Research questions and friction points this paper is trying to address.

Improves communication efficiency in decentralized federated learning
Ensures privacy preservation with differential privacy integration
Enhances trustworthiness and stability in distributed model training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse DFL with one-bit compressive sensing
CEPS algorithm ensures differential privacy
Theoretical guarantees for convergence and privacy
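The partial-neighbor-selection idea behind the decentralized training can be sketched as a gossip step: at each round, every node mixes its model with a random subset of its graph neighbors rather than all of them. The topology, selection rule, and uniform averaging below are illustrative assumptions, not the exact CEPS schedule:

```python
import numpy as np

def gossip_step(models, neighbors, k, rng):
    """One round of decentralized averaging with partial neighbor selection:
    each node averages its model with k randomly chosen neighbors."""
    new = {}
    for i, w in models.items():
        chosen = rng.choice(neighbors[i], size=min(k, len(neighbors[i])),
                            replace=False)
        new[i] = np.mean([w] + [models[int(j)] for j in chosen], axis=0)
    return new

# Ring topology over 4 nodes, scalar models initialized to 0..3
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
models = {i: np.array([float(i)]) for i in range(4)}
rng = np.random.default_rng(0)
for _ in range(50):
    models = gossip_step(models, neighbors, k=1, rng=rng)
```

After enough rounds the node models contract toward consensus even though each node only ever talks to one neighbor per round, which is why partial selection can cut communication without destroying stability.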
Shan Sha
Shenglong Zhou
University of Science and Technology of China
Computer Vision · Image Registration · Transfer Learning
Lingchen Kong
Geoffrey Y. Li
ITP Lab, Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, United Kingdom