Fed-DPRoC:Communication-Efficient Differentially Private and Robust Federated Learning

📅 2025-08-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of simultaneously achieving differential privacy (DP), Byzantine robustness, and communication efficiency in federated learning. We propose the first robustness-compatible compression framework. Our key contribution is the formal definition and realization of “robustness-compatible compression”: we theoretically prove that Johnson–Lindenstrauss (JL) transformation preserves compatibility with robust aggregation—ensuring that compressed model updates satisfy $(varepsilon,delta)$-DP while retaining resilience against Byzantine attacks. Our method applies JL-based compression to reduce communication overhead and integrates robust aggregators (e.g., trimmed mean) to enable efficient, secure, and privacy-preserving model aggregation. Experiments on CIFAR-10 and Fashion-MNIST demonstrate that our approach significantly outperforms existing DP-robust baselines under diverse Byzantine attacks, improving model accuracy by 3.2–8.7 percentage points and reducing communication volume by approximately 60%.

Technology Category

Application Category

📝 Abstract
We propose Fed-DPRoC, a novel federated learning framework that simultaneously ensures differential privacy (DP), Byzantine robustness, and communication efficiency. We introduce the concept of robust-compatible compression, which enables users to compress DP-protected updates while maintaining the robustness of the aggregation rule. We instantiate our framework as RobAJoL, combining the Johnson-Lindenstrauss (JL) transform for compression with robust averaging for robust aggregation. We theoretically prove the compatibility of JL transform with robust averaging and show that RobAJoL preserves robustness guarantees, ensures DP, and reduces communication cost. Experiments on CIFAR-10 and Fashion MNIST validate our theoretical claims and demonstrate that RobAJoL outperforms existing methods in terms of robustness and utility under different Byzantine attacks.
Problem

Research questions and friction points this paper is trying to address.

Ensures differential privacy in federated learning
Achieves Byzantine robustness in distributed systems
Reduces communication costs with efficient compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Robust-compatible compression for DP updates
JL transform with robust averaging
Preserves robustness, DP, reduces communication
🔎 Similar Papers
No similar papers found.