Compressed Private Aggregation for Scalable and Robust Federated Learning over Massive Networks

๐Ÿ“… 2023-08-01
๐Ÿ›๏ธ IEEE Transactions on Mobile Computing
๐Ÿ“ˆ Citations: 4
โœจ Influential: 1
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Federated learning (FL) faces three critical challenges in large-scale deployment: privacy leakage, poisoning attacks by malicious users, and excessive communication overhead. These are commonly addressed in isolation, each at a cost in model accuracy. This paper proposes the Compressed Private Aggregation (CPA) framework, the first to jointly integrate nested lattice quantization, random codebook-based compression, and local differential privacy (LDP) perturbation in a single unified pipeline. CPA simultaneously achieves LDP guarantees, user anonymity, and robust aggregation, and the authors prove that it attains the same asymptotic convergence rate as standard FL. Empirically, on image classification tasks, CPA significantly outperforms decoupled compression-plus-privacy baselines at extremely low bit rates (a few bits per model parameter), reducing communication overhead by one to two orders of magnitude. It also remains robust to poisoning attacks while maintaining accuracy close to that of non-private FL.
๐Ÿ“ Abstract
Federated learning (FL) is an emerging paradigm that allows a central server to train machine learning models using remote users' data. Despite its growing popularity, FL faces challenges in preserving the privacy of local datasets, its sensitivity to poisoning attacks by malicious users, and its communication overhead; the latter is particularly dominant in large-scale networks. These limitations are often individually mitigated by local differential privacy (LDP) mechanisms, robust aggregation, compression, and user selection techniques, which typically come at the cost of accuracy. In this work, we present compressed private aggregation (CPA), which allows massive deployments to simultaneously communicate at extremely low bit rates while achieving privacy, anonymity, and resilience to malicious users. CPA randomizes a codebook for compressing the data into a few bits using nested lattice quantizers, ensuring anonymity and robustness, followed by a perturbation that guarantees LDP. The proposed CPA is proven to yield FL convergence at the same asymptotic rate as FL without privacy, compression, or robustness considerations, while satisfying both anonymity and LDP requirements. These analytical properties are empirically confirmed in a numerical study, where we demonstrate the performance gains of CPA compared with separate mechanisms for compression and privacy when training different image classification models, as well as its robustness in mitigating the harmful effects of malicious users.
Problem

Research questions and friction points this paper is trying to address.

Preserving privacy in federated learning with LDP
Reducing communication overhead in large-scale FL networks
Ensuring robustness against poisoning attacks by malicious users
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compresses model updates to a few bits per parameter via nested lattice quantizers
Randomizes the codebook to provide anonymity and robustness in aggregation
Perturbs the quantized updates to satisfy LDP
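The pipeline sketched above (quantize on a randomized lattice codebook, then perturb for LDP) can be illustrated for a single user with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's construction: it uses a scalar (1-D) nested lattice in place of the paper's lattices, a shared random dither as a stand-in for codebook randomization, and generalized randomized response as the LDP mechanism; all function names and parameters are illustrative.

```python
import numpy as np

def nested_lattice_quantize(x, step=0.5, levels=8, dither=0.0):
    """Dithered scalar (1-D) nested-lattice quantizer: round each coordinate
    on a fine lattice of spacing `step`, then fold modulo a coarse lattice of
    `levels` points, so each coordinate costs log2(levels) bits (toy stand-in
    for the paper's lattice quantizer)."""
    fine = np.round((x + dither) / step)       # fine-lattice index
    return np.mod(fine, levels).astype(int)    # fold into a finite codebook

def ldp_perturb(idx, levels=8, eps=2.0, rng=None):
    """Generalized randomized response on the quantizer indices: report the
    true index with probability e^eps / (e^eps + levels - 1), otherwise a
    uniformly random *different* index, giving eps-LDP per index."""
    rng = rng or np.random.default_rng()
    p_keep = np.exp(eps) / (np.exp(eps) + levels - 1)
    keep = rng.random(idx.shape) < p_keep
    shift = rng.integers(1, levels, size=idx.shape)  # uniform over wrong indices
    return np.where(keep, idx, (idx + shift) % levels)

def dequantize(idx, step=0.5, levels=8, dither=0.0):
    """Map indices back to lattice points; values outside the coarse cell
    wrap around (the usual nested-lattice overload)."""
    centered = np.where(idx >= levels // 2, idx - levels, idx)
    return centered * step - dither

# One user's step: quantize a gradient with a shared random dither
# (a stand-in for the randomized codebook), then perturb for LDP.
rng = np.random.default_rng(0)
grad = rng.normal(scale=0.5, size=6)
dither = rng.uniform(0.0, 0.5)        # secret shared by user and server
q = nested_lattice_quantize(grad, dither=dither)
q_priv = ldp_perturb(q, rng=rng)
recovered = dequantize(q_priv, dither=dither)
```

Because the server knows the dither but an eavesdropper does not, the transmitted indices alone do not reveal the underlying values, which is the intuition behind the anonymity claim; the randomized response on top supplies the formal LDP guarantee.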
๐Ÿ”Ž Similar Papers
No similar papers found.