ByzSecAgg: A Byzantine-Resistant Secure Aggregation Scheme for Federated Learning Based on Coded Computing and Vector Commitment

📅 2023-02-20
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address the dual threats of Byzantine attacks and privacy leakage in federated learning, compounded by the high communication overhead that secure aggregation imposes on bandwidth-constrained edge devices, this paper proposes an efficient and robust secure aggregation scheme. Methodologically, it introduces a two-phase block-wise ramp secret sharing mechanism that draws on coded computing principles to make verifiable secret sharing practical: the commitment size stays constant rather than growing with the update length, and the sharing supports secure bilinear operations. On this basis, it designs a privacy-preserving pairwise distance computation protocol for robust anomaly detection. Compared to the state-of-the-art scheme BREA, ByzSecAgg significantly reduces communication overhead while guaranteeing aggregation integrity, resilience against colluding adversaries, and end-to-end privacy.
📝 Abstract
In this paper, we propose ByzSecAgg, an efficient secure aggregation scheme for federated learning that is protected against Byzantine attacks and privacy leakages. Processing individual updates to manage adversarial behavior, while preserving privacy of data against colluding nodes, requires some sort of secure secret sharing. However, the communication load for secret sharing of long vectors of updates can be very high. ByzSecAgg solves this problem by partitioning local updates into smaller sub-vectors and sharing them using ramp secret sharing. However, this sharing method does not admit bilinear computations, such as pairwise distance calculations, needed by outlier-detection algorithms. To overcome this issue, each user runs another round of ramp sharing, with a different embedding of data in the sharing polynomial. This technique, motivated by ideas from coded computing, enables secure computation of pairwise distances. In addition, to maintain the integrity and privacy of the local update, ByzSecAgg also uses a vector commitment method, in which the commitment size remains constant (i.e., it does not increase with the length of the local update), while simultaneously allowing verification of the secret sharing process. In terms of communication loads, ByzSecAgg significantly outperforms the state-of-the-art scheme, known as BREA.
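The abstract's core mechanics — partitioning the update into sub-vectors, ramp-sharing them, and computing pairwise distances directly on shares — can be sketched in a few lines. Everything below is an illustrative toy, not ByzSecAgg itself: the prime field, the choice of embedding the secrets at evaluation points 1..k, the share points, and the parameters k and t are all assumptions made for the sketch; the paper's actual polynomial embeddings and verification steps differ.

```python
import random

PRIME = 2**31 - 1  # illustrative prime field; a real system sizes the field to the model

def _lagrange_eval(xs, ys, x):
    """Evaluate the unique degree-(len(xs)-1) polynomial through (xs, ys) at x, mod PRIME."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num = den = 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * ((x - xj) % PRIME) % PRIME
                den = den * ((xi - xj) % PRIME) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

def ramp_share(update, k, t, share_points):
    """Split `update` (length divisible by k) into k sub-vectors and ramp-share them.

    Coordinate-wise, a degree-(k+t-1) polynomial passes through the k secret
    values at points 1..k and t random masking values at points k+1..k+t; a
    node's share is that polynomial evaluated at its own point (share points
    must avoid 1..k+t). Per-node traffic drops by the factor k versus Shamir.
    """
    m = len(update) // k
    xs = list(range(1, k + t + 1))
    rand_vals = [[random.randrange(PRIME) for _ in range(t)] for _ in range(m)]
    shares = []
    for a in share_points:
        ys_at = lambda j: [update[i * m + j] for i in range(k)] + rand_vals[j]
        shares.append([_lagrange_eval(xs, ys_at(j), a) for j in range(m)])
    return shares

def reconstruct(shares, share_points, k):
    """Recover the update from at least k+t shares by interpolating back to points 1..k."""
    m = len(shares[0])
    return [_lagrange_eval(share_points, [s[j] for s in shares], i)
            for i in range(1, k + 1) for j in range(m)]

def secure_sq_distance(shares_u, shares_v, share_points, k):
    """Pairwise squared distance from shares alone: each node reports one scalar
    (a bilinear function of its two shares); with >= 2(k+t)-1 reports the server
    interpolates them and sums the values at the k secret points."""
    node_vals = [sum((a - b) ** 2 for a, b in zip(su, sv)) % PRIME
                 for su, sv in zip(shares_u, shares_v)]
    return sum(_lagrange_eval(share_points, node_vals, i)
               for i in range(1, k + 1)) % PRIME
```

The distance step is where the bilinearity issue shows up: squaring doubles the polynomial degree, so the server needs roughly twice as many honest reports as plain reconstruction — the motivation for the paper's second, differently embedded round of sharing.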
Problem

Research questions and friction points this paper is trying to address.

Resisting Byzantine attacks in federated learning
Reducing communication overhead for secure aggregation
Enabling secure distance computations for outlier detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses ramp secret sharing for sub-vector updates
Employs coded computing for secure distance computation
Integrates vector commitment for privacy and integrity
Tayyebeh Jahani-Nezhad
Postdoctoral Researcher at Technische Universität Berlin (TUB)
Machine Learning · Information Theory · Distributed Computing
M. Maddah-Ali
Department of Electrical and Computer Engineering, University of Minnesota, Twin Cities
G. Caire
Electrical Engineering and Computer Science Department, Technische Universität Berlin, 10587 Berlin, Germany