Enhancing Security and Privacy in Federated Learning using Low-Dimensional Update Representation and Proximity-Based Defense

πŸ“… 2024-05-29
πŸ›οΈ IEEE Transactions on Knowledge and Data Engineering
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Federated learning (FL) faces dual threats of privacy leakage and Byzantine attacks under mutual distrust between clients and servers. This paper proposes FLURP, the first framework integrating low-dimensional update representation (LUR) with a lightweight proximity-based defense leveraging shared distance matrices, achieving end-to-end privacy via optimized secure multi-party computation (SMPC) while efficiently identifying malicious model updates. Key innovations include an β„“βˆž-norm sliding-window compression scheme and joint distance-matrix computation, reducing SMPC communication overhead by three orders of magnitude. Extensive experiments demonstrate that FLURP maintains high accuracy and strong robustness against diverse Byzantine attacks, significantly outperforms state-of-the-art methods in both communication and computational efficiency, and exhibits excellent scalability.

πŸ“ Abstract
Federated Learning (FL) is a promising privacy-preserving machine learning paradigm that allows data owners to collaboratively train models while keeping their data localized. Despite its potential, FL faces challenges related to the trustworthiness of both clients and servers, particularly against curious or malicious adversaries. In this paper, we introduce a novel framework named Federated Learning with Low-Dimensional Update Representation and Proximity-Based defense (FLURP), designed to address privacy preservation and resistance to Byzantine attacks in distributed learning environments. FLURP employs the LinfSample method, enabling clients to compute the ℓ∞ norm across sliding windows of updates, resulting in a Low-Dimensional Update Representation (LUR). Calculating the shared distance matrix among LURs, rather than updates, significantly reduces the overhead of Secure Multi-Party Computation (SMPC) by three orders of magnitude while effectively distinguishing between benign and poisoned updates. Additionally, FLURP integrates a privacy-preserving proximity-based defense mechanism utilizing optimized SMPC protocols to minimize communication rounds. Our experiments demonstrate FLURP's effectiveness in countering Byzantine adversaries with low communication and runtime overhead. FLURP offers a scalable framework for secure and reliable FL in distributed environments, facilitating its application in scenarios requiring robust data management and security.
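The LUR pipeline described in the abstract can be illustrated with a plaintext sketch: compress each client's flattened update by taking the ℓ∞ norm over windows, then compare clients via pairwise distances between the compressed representations. This is a hypothetical toy reconstruction, not the paper's implementation; function names, the window size, non-overlapping (rather than sliding) windows, and the Euclidean distance choice are illustrative assumptions, and the SMPC layer that FLURP uses to keep this computation private is omitted entirely.

```python
# Hypothetical sketch of the LinfSample idea from the abstract
# (illustrative only; omits FLURP's SMPC protocols entirely).

def linf_sample(update, window=4):
    """Compress an update vector into a Low-Dimensional Update
    Representation (LUR): max |value| per window of coordinates.
    Non-overlapping windows are an assumption for simplicity."""
    return [max(abs(x) for x in update[i:i + window])
            for i in range(0, len(update), window)]

def distance_matrix(lurs):
    """Pairwise Euclidean distances between clients' LURs.
    FLURP computes a shared matrix like this under SMPC; here it
    is computed in the clear purely for illustration."""
    n = len(lurs)
    return [[sum((a - b) ** 2 for a, b in zip(lurs[i], lurs[j])) ** 0.5
             for j in range(n)]
            for i in range(n)]

# Toy example: two similar benign updates and one scaled "poisoned" one.
benign1 = [0.1, -0.2, 0.05, 0.0, 0.3, -0.1, 0.2, 0.15]
benign2 = [0.12, -0.18, 0.06, 0.01, 0.28, -0.09, 0.21, 0.14]
poisoned = [10 * x for x in benign1]

lurs = [linf_sample(u) for u in (benign1, benign2, poisoned)]
D = distance_matrix(lurs)
# The benign pair sits much closer together than either does to the
# poisoned update, which is what the proximity-based defense exploits.
assert D[0][1] < D[0][2] and D[0][1] < D[1][2]
```

The point of the compression step is that the distance matrix is computed over short LUR vectors instead of full model updates, which is where the claimed reduction in SMPC cost comes from.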
Problem

Research questions and friction points this paper is trying to address.

Enhance privacy in Federated Learning
Defend against Byzantine attacks
Reduce Secure Multi-Party Computation overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-Dimensional Update Representation
Proximity-Based Defense Mechanism
Secure Multi-Party Computation
πŸ”Ž Similar Papers
No similar papers found.
Wenjie Li
State Key Laboratory of Integrated Service Networks, Xidian University, Xi’an 710071, China and also with the College of Computing and Data Science, Nanyang Technological University, Singapore 639798
K. Fan
State Key Laboratory of Integrated Service Networks, Xidian University, Xi’an 710071, China
Jingyuan Zhang
College of Computing and Data Science, Nanyang Technological University, Singapore 639798
Hui Li
State Key Laboratory of Integrated Service Networks, Xidian University, Xi’an 710071, China
Wei Yang Bryan Lim
Assistant Professor, Nanyang Technological University (NTU), Singapore
Edge Intelligence · Federated Learning · Applied AI · Sustainable AI
Qiang Yang
Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong 999077, China