🤖 AI Summary
Federated learning faces dual threats from Byzantine attacks and privacy inference attacks, yet existing defenses often incur prohibitive computational or communication overhead, hindering practical deployment. To address this, we propose ABBR—a lightweight, deployable framework that integrates dimensionality reduction (PCA or random projection) into privacy-preserving robust aggregation, significantly accelerating the computationally intensive filtering operations. We theoretically analyze the filtering error induced by low-dimensional projections and design an adaptive threshold tuning strategy to prevent malicious models from evading detection. ABBR incorporates established robust aggregators (e.g., Krum, Bulyan) without modification. Extensive evaluation on public benchmarks demonstrates that ABBR achieves substantial speedup, incurs nearly zero additional communication cost, matches or exceeds baseline methods in Byzantine resilience, and provides strong protection against privacy inference attacks—thereby reconciling efficiency, robustness, and privacy in real-world federated learning systems.
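To make the core idea concrete, the following is a minimal sketch (our illustration, not the authors' implementation; all function names are hypothetical) of projecting high-dimensional client updates into a low-dimensional space with a random Gaussian matrix and then running Krum's pairwise-distance filtering on the cheap projections:

```python
import numpy as np

def random_project(updates, k, rng):
    """Project each row of `updates` (n_clients x d) down to k dimensions
    with a Johnson-Lindenstrauss-style Gaussian random matrix."""
    d = updates.shape[1]
    P = rng.standard_normal((d, k)) / np.sqrt(k)
    return updates @ P

def krum_select(updates, n_byzantine):
    """Return the index of the Krum winner: the update with the smallest
    sum of squared distances to its n - f - 2 nearest neighbours."""
    n = len(updates)
    m = n - n_byzantine - 2
    diffs = updates[:, None, :] - updates[None, :, :]
    dists = (diffs ** 2).sum(axis=-1)
    # Sort each row; skip the zero self-distance, sum the m nearest.
    scores = np.sort(dists, axis=1)[:, 1:m + 1].sum(axis=1)
    return int(np.argmin(scores))

rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(8, 1000))     # 8 honest updates near 0
byzantine = rng.normal(5.0, 0.1, size=(2, 1000))  # 2 far-away outliers
updates = np.vstack([honest, byzantine])

low = random_project(updates, k=32, rng=rng)      # filter in 32-D, not 1000-D
winner = krum_select(low, n_byzantine=2)
print(winner)  # an honest client index (0..7)
```

The filtering cost of Krum is dominated by the pairwise-distance computation, which scales linearly in the model dimension, so replacing the 1000-dimensional vectors with 32-dimensional projections cuts that cost by roughly the same factor.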
📝 Abstract
Federated Learning (FL) allows multiple clients to collaboratively train a model without sharing their private data. However, FL is vulnerable to Byzantine attacks, where adversaries manipulate client models to compromise the federated model, and privacy inference attacks, where adversaries exploit client models to infer private data. Existing defenses against both Byzantine and privacy inference attacks introduce significant computational and communication overhead, creating a gap between theory and practice. To address this, we propose ABBR, a practical framework for Byzantine-robust and privacy-preserving FL. We are the first to utilize dimensionality reduction to speed up the private computation of complex filtering rules in privacy-preserving FL. Additionally, we analyze the accuracy loss of vector-wise filtering in low-dimensional space and introduce an adaptive tuning strategy to minimize the impact on the global model of malicious models that bypass filtering. We implement ABBR with state-of-the-art Byzantine-robust aggregation rules and evaluate it on public datasets, showing that it runs significantly faster, has minimal communication overhead, and maintains nearly the same Byzantine resilience as the baselines.
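The claim that distance-based filtering survives dimensionality reduction rests on the Johnson-Lindenstrauss property: random projections preserve pairwise distances up to a small relative error, so the distance rankings used by aggregators such as Krum and Bulyan are largely unchanged. A small numerical check (our illustration, not the paper's analysis) of this distortion:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, n = 2000, 64, 10                    # original dim, projected dim, points
X = rng.standard_normal((n, d))           # n random high-dimensional vectors
P = rng.standard_normal((d, k)) / np.sqrt(k)  # JL-style Gaussian projection
Y = X @ P

def pairwise(M):
    """All pairwise Euclidean distances between the rows of M."""
    return np.linalg.norm(M[:, None, :] - M[None, :, :], axis=-1)

hi, lo = pairwise(X), pairwise(Y)
iu = np.triu_indices(n, 1)                # upper-triangle pairs (i < j)
distortion = np.abs(lo[iu] / hi[iu] - 1.0)
print(round(float(distortion.max()), 3))  # worst-case relative distance error
```

For k = 64 the worst-case relative error is typically a small fraction of the distance itself, which is why filtering in the projected space misclassifies few updates; the paper's adaptive tuning strategy is then needed to bound the damage from the residual misclassifications.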