Practical Framework for Privacy-Preserving and Byzantine-robust Federated Learning

📅 2025-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning faces dual threats from Byzantine attacks and privacy inference attacks, yet existing defenses often incur prohibitive computational or communication overhead, hindering practical deployment. To address this, we propose ABBR—a lightweight, deployable framework that innovatively integrates dimensionality reduction (PCA or random projection) into differentially private robust aggregation, significantly accelerating computationally intensive filtering operations. We theoretically analyze the filtering error induced by low-dimensional projections and design an adaptive threshold tuning strategy to prevent malicious models from evading detection. ABBR seamlessly incorporates established robust aggregators (e.g., Krum, Bulyan). Extensive evaluation on public benchmarks demonstrates that ABBR achieves substantial speedup, incurs nearly zero additional communication cost, matches or exceeds baseline methods in Byzantine resilience, and provides strong protection against privacy inference attacks—thereby reconciling efficiency, robustness, and privacy in real-world federated learning systems.

📝 Abstract
Federated Learning (FL) allows multiple clients to collaboratively train a model without sharing their private data. However, FL is vulnerable to Byzantine attacks, where adversaries manipulate client models to compromise the federated model, and privacy inference attacks, where adversaries exploit client models to infer private data. Existing defenses against both Byzantine and privacy inference attacks introduce significant computational and communication overhead, creating a gap between theory and practice. To address this, we propose ABBR, a practical framework for Byzantine-robust and privacy-preserving FL. We are the first to utilize dimensionality reduction to speed up the private computation of complex filtering rules in privacy-preserving FL. Additionally, we analyze the accuracy loss of vector-wise filtering in low-dimensional space and introduce an adaptive tuning strategy to minimize the impact on the global model of malicious models that bypass filtering. We implement ABBR with state-of-the-art Byzantine-robust aggregation rules and evaluate it on public datasets, showing that it runs significantly faster, has minimal communication overhead, and maintains nearly the same Byzantine resilience as the baselines.
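The core idea, robust filtering on dimensionality-reduced updates, can be illustrated with a minimal sketch. This is not the paper's implementation: the function `projected_krum`, its parameters, and the Gaussian projection choice are illustrative assumptions; it only shows how a Krum-style filter can run on a low-dimensional sketch of the client updates instead of the full parameter vectors.

```python
import numpy as np

def projected_krum(updates, n_byzantine, proj_dim=64, seed=0):
    """Krum selection on randomly projected client updates (illustrative).

    updates: (n_clients, d) array of model-update vectors.
    Pairwise distances are computed in a proj_dim-dimensional sketch,
    which approximately preserves them (Johnson-Lindenstrauss) while
    cutting the O(n^2 * d) distance cost down to O(n^2 * proj_dim).
    """
    n, d = updates.shape
    rng = np.random.default_rng(seed)
    # Gaussian random projection, scaled so expected norms are preserved.
    proj = rng.standard_normal((d, proj_dim)) / np.sqrt(proj_dim)
    low = updates @ proj

    # Krum score: sum of squared distances to the n - f - 2 nearest neighbors.
    dists = np.linalg.norm(low[:, None, :] - low[None, :, :], axis=2) ** 2
    np.fill_diagonal(dists, np.inf)  # exclude distance to self
    k = n - n_byzantine - 2
    scores = np.sort(dists, axis=1)[:, :k].sum(axis=1)
    return int(np.argmin(scores))  # index of the selected client update
```

In a full pipeline this filtering step would additionally run under differential privacy, and the projection dimension would be set by the adaptive threshold strategy the paper describes; both are omitted here for brevity.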
Problem

Research questions and friction points this paper is trying to address.

Addresses Byzantine attacks and privacy inference in federated learning
Reduces computational and communication overhead in existing defenses
Enhances efficiency while maintaining Byzantine-resilience and privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses dimensionality reduction to speed private computation
Adaptive tuning minimizes impact of bypassing malicious models
Combines Byzantine-robust aggregation with privacy-preserving filtering
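Why dimensionality reduction can speed up filtering without destroying it rests on distance preservation: a Gaussian random projection approximately preserves pairwise distances between high-dimensional update vectors. The snippet below is an empirical illustration of that property, not the paper's error analysis; the dimensions and tolerance are arbitrary choices.

```python
import numpy as np

# Empirical check: the distance between two high-dimensional vectors
# survives a Gaussian random projection, which is what makes
# distance-based Byzantine filtering in the sketch space meaningful.
rng = np.random.default_rng(0)
d, proj_dim = 10_000, 256
x, y = rng.standard_normal(d), rng.standard_normal(d)
proj = rng.standard_normal((d, proj_dim)) / np.sqrt(proj_dim)

true_dist = np.linalg.norm(x - y)
proj_dist = np.linalg.norm(x @ proj - y @ proj)
ratio = proj_dist / true_dist  # concentrates around 1.0 as proj_dim grows
```

The paper's contribution includes bounding the filtering error this distortion induces and adaptively tuning thresholds so borderline malicious updates cannot exploit it.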
Baolei Zhang
Nankai University
Minghong Fang
University of Louisville
Security · Privacy · AI Safety · Machine Learning
Zhuqing Liu
Assistant Professor of Computer Science and Engineering, University of North Texas
Biao Yi
Nankai University
LLM Security · Trustworthy LLM · Steganography
Peizhao Zhou
College of Computer Science, Nankai University, China
Yuan Wang
School of Mathematical Sciences, Nankai University, China
Tong Li
College of Cyber Science, Nankai University, China
Zheli Liu
College of Cyber Science, Nankai University, China