LSHFed: Robust and Communication-Efficient Federated Learning with Locally-Sensitive Hashing Gradient Mapping

📅 2025-11-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning (FL) faces dual threats: inference attacks compromising client privacy and poisoning attacks degrading model robustness; existing defenses often suffer from excessive communication overhead or insufficient detection accuracy. This paper proposes a communication-efficient and robust FL security framework. Its core innovation is a multi-hyperplane locality-sensitive hashing (LSH) scheme that performs irreversible binary encoding of local gradients, enabling lightweight anomaly detection and aggregation verification directly in the hash domain. The method simultaneously preserves gradient privacy and enables reliable identification of malicious updates, substantially reducing communication costs. Experimental results demonstrate that the framework maintains model accuracy even under 50% client collusion attacks. Gradient verification incurs up to three orders of magnitude lower communication overhead compared to baselines, while achieving significantly higher detection accuracy.
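To make the core mechanism concrete, here is a minimal sketch of multi-hyperplane LSH encoding of a gradient, assuming hyperplanes shared via a common random seed and an illustrative code length; the dimensions, seed, and function names are assumptions for illustration, not the paper's exact LSHGM implementation.

```python
# Minimal sketch of multi-hyperplane LSH gradient encoding (illustrative, not the
# paper's exact LSHGM): each client projects its flattened gradient onto k random
# hyperplanes shared with the server and keeps only the signs, producing a compact,
# irreversible binary code. The hyperplane count k, dimension d, and seed are assumed.
import numpy as np

def lsh_encode(gradient: np.ndarray, hyperplanes: np.ndarray) -> np.ndarray:
    """Map a flattened gradient to a k-bit sign code via random hyperplane projections."""
    projections = hyperplanes @ gradient          # shape: (k,)
    return (projections >= 0).astype(np.uint8)    # one bit per hyperplane

# Shared random hyperplanes: k rows, each a random normal vector in gradient space.
rng = np.random.default_rng(seed=42)              # fixed seed so all parties agree
d, k = 10_000, 256                                # gradient dimension, code length (assumed)
hyperplanes = rng.standard_normal((k, d))

local_gradient = rng.standard_normal(d)           # stand-in for a client's flattened gradient
code = lsh_encode(local_gradient, hyperplanes)    # 256 bits instead of 10,000 floats
```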

📝 Abstract
Federated learning (FL) enables collaborative model training across distributed nodes without exposing raw data, but its decentralized nature makes it vulnerable in trust-deficient environments. Inference attacks may recover sensitive information from gradient updates, while poisoning attacks can degrade model performance or induce malicious behaviors. Existing defenses often suffer from high communication and computation costs or limited detection precision. To address these issues, we propose LSHFed, a robust and communication-efficient FL framework that simultaneously enhances aggregation robustness and privacy preservation. At its core, LSHFed incorporates LSHGM, a novel gradient verification mechanism that projects high-dimensional gradients into compact binary representations via multi-hyperplane locally-sensitive hashing. This enables accurate detection and filtering of malicious gradients using only their irreversible hash forms, thus mitigating privacy leakage risks and substantially reducing transmission overhead. Extensive experiments demonstrate that LSHFed maintains high model performance even when up to 50% of participants are collusive adversaries, while achieving up to a 1000x reduction in gradient verification communication compared to full-gradient methods.
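The hash-domain verification step described in the abstract can be sketched as follows; the bit-wise majority-vote reference code and the fixed Hamming-distance threshold are illustrative assumptions, not the paper's exact detection rule.

```python
# Sketch of hash-domain gradient verification under stated assumptions: the server
# compares each client's binary code against the element-wise majority code and keeps
# only clients whose normalized Hamming distance stays below a threshold. The
# majority-vote reference and the 0.3 threshold are illustrative choices.
import numpy as np

def filter_clients(codes: np.ndarray, max_hamming_frac: float = 0.3) -> np.ndarray:
    """Return indices of clients whose k-bit codes stay close to the majority code."""
    majority = (codes.mean(axis=0) >= 0.5).astype(np.uint8)       # bit-wise majority vote
    hamming = (codes != majority).sum(axis=1) / codes.shape[1]    # normalized distances
    return np.where(hamming <= max_hamming_frac)[0]

# Example: 8 honest clients with correlated codes plus 2 adversaries with random codes.
rng = np.random.default_rng(0)
k = 256
base = rng.integers(0, 2, size=k, dtype=np.uint8)
honest = np.array([np.where(rng.random(k) < 0.1, 1 - base, base) for _ in range(8)])
malicious = rng.integers(0, 2, size=(2, k), dtype=np.uint8)
codes = np.vstack([honest, malicious])

accepted = filter_clients(codes)   # adversarial rows are likely rejected
```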
Problem

Research questions and friction points this paper is trying to address.

Enhancing security against inference and poisoning attacks in federated learning
Reducing communication costs in decentralized model training systems
Improving malicious gradient detection while preserving participant privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses locally-sensitive hashing for gradient mapping
Detects malicious gradients via binary hash representations
Reduces communication costs while preserving privacy (a back-of-the-envelope illustration follows this list)
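As a rough illustration of where a ~1000x saving can come from, consider sending a k-bit hash instead of a full float32 gradient for verification; the dimension d and code length k below are assumed for illustration and are not taken from the paper.

```python
# Back-of-the-envelope arithmetic for the communication saving of hash-domain
# verification. d (model size) and k (hash length) are assumed, not the paper's values.
d = 1_000_000                       # number of model parameters (assumed)
k = 32_000                          # bits in the LSH code (assumed)

full_gradient_bits = 32 * d         # float32 gradient sent for verification
hash_bits = k                       # compact binary code sent instead

print(full_gradient_bits / hash_bits)   # -> 1000.0, i.e. roughly 1000x less traffic
```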
Guanjie Cheng
Assistant Professor, School of Software Technology, Zhejiang University
AIoT, Multi-Agent Collaboration, Edge Computing, Data Security and Blockchain, Privacy Protection
Mengzhen Yang
School of Software Technology, Zhejiang University
Xinkui Zhao
School of Software Technology, Zhejiang University
Shuyi Yu
School of Software Technology, Zhejiang University
Tianyu Du
Zhejiang University
AI Security, Adversarial Machine Learning
Yangyang Wu
Zhejiang University
Large Language Model, Data Cleaning, Multi-modal Analysis
Mengying Zhu
Zhejiang University
Online Learning, Fintech, Portfolio
Shuiguang Deng
College of Computer Science and Technology, Zhejiang University