Byzantine-Resilient Secure Aggregation for Federated Learning Without Privacy Compromises

📅 2024-05-14
🏛️ Information Theory Workshop
📈 Citations: 5
Influential: 0
🤖 AI Summary
In federated learning, the coexistence of untrusted servers and Byzantine clients makes it challenging to simultaneously achieve robust aggregation and information-theoretic privacy. This paper proposes the first federated aggregation scheme that jointly achieves Byzantine fault tolerance (BFT) and full information-theoretic privacy. It employs Lagrange coding and verifiable secret sharing for secure gradient distribution; introduces a gradient re-randomization mechanism to mitigate model poisoning attacks; and, as its central novelty, embeds ReLU-based trust scoring into the privacy-preserving framework via polynomial approximation, enabling dynamic, maliciousness-resilient weight assignment. Theoretically, the scheme guarantees model convergence under an *f*-Byzantine adversary and ensures strict information-theoretic secrecy of users' raw gradients from both the server and other participants. Extensive experiments demonstrate high accuracy and strong privacy preservation even under high Byzantine participation rates.
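The trust-scoring idea inherited from FLTrust can be sketched in plaintext Python: fit a low-degree polynomial to ReLU on [-1, 1] (the range of cosine similarities), score each user's gradient by the polynomial of its similarity to the server's reference gradient, and form a trust-weighted aggregate. This is an illustrative sketch only; all function names are ours, and in ByITFL the polynomial is evaluated inside the secret-sharing protocol rather than in the clear.

```python
import numpy as np

def relu_poly_coeffs(degree=4, grid=np.linspace(-1.0, 1.0, 201)):
    """Least-squares polynomial fit to ReLU on [-1, 1].
    Cosine similarities lie in this interval, so the
    approximation only needs to be accurate there."""
    relu = np.maximum(grid, 0.0)
    return np.polyfit(grid, relu, degree)

def trust_score(user_grad, server_grad, coeffs):
    """FLTrust-style trust score: polynomial-approximated ReLU of the
    cosine similarity between a user's gradient and the server's
    reference gradient."""
    cos = user_grad @ server_grad / (
        np.linalg.norm(user_grad) * np.linalg.norm(server_grad))
    return np.polyval(coeffs, cos)

def aggregate(grads, server_grad, coeffs):
    """Trust-weighted aggregate of norm-clipped user gradients.
    The clamp to 0 is a plaintext convenience (the polynomial can dip
    slightly negative); the secret-shared protocol uses the polynomial
    value directly."""
    ts = np.array([max(trust_score(g, server_grad, coeffs), 0.0)
                   for g in grads])
    norm0 = np.linalg.norm(server_grad)
    clipped = [g * (norm0 / np.linalg.norm(g)) for g in grads]
    return sum(t * g for t, g in zip(ts, clipped)) / ts.sum()
```

A gradient pointing with the reference gets a trust score near 1, while one pointing against it scores near 0, so poisoned updates are attenuated in the weighted sum.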

📝 Abstract
Federated learning (FL) shows great promise in large-scale machine learning but brings new risks in terms of privacy and security. We propose ByITFL, a novel scheme for FL that provides resilience against Byzantine users while keeping the users' data private from the federator and private from other users. Our scheme builds on the preexisting non-private FLTrust scheme, which tolerates malicious users through trust scores (TS) that attenuate or amplify the users' gradient updates. The trust scores are based on the ReLU function, which we approximate by a polynomial. The distributed and privacy-preserving computation in ByITFL is designed using a combination of Lagrange coded computing, verifiable secret sharing and re-randomization steps. ByITFL is the first Byzantine resilient scheme for FL with full information-theoretic privacy.
Problem

Research questions and friction points this paper is trying to address.

Ensuring privacy in federated learning in the presence of Byzantine users
Combining secure aggregation with Byzantine-resilience techniques
Providing information-theoretic privacy without compromising security
Innovation

Methods, ideas, or system contributions that make the work stand out.

Polynomial approximation of ReLU for trust scores
Lagrange coded computing for privacy
Verifiable secret sharing for security
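The Lagrange-coded sharing at the heart of the scheme can be illustrated with a minimal finite-field sketch: a secret is placed at evaluation point 0 on a polynomial alongside t uniformly random masks, and each party receives one evaluation; any t + 1 shares reconstruct the secret, while t or fewer reveal nothing. This sketch is ours and simplified, assuming a single secret per polynomial and a small Mersenne-prime field; the actual scheme packs multiple gradient values per polynomial and adds verifiability and re-randomization, which are omitted here.

```python
import random

P = 2**31 - 1  # Mersenne prime; the real scheme fixes a suitably large field

def eval_interp(xs, ys, x, p=P):
    """Evaluate the Lagrange interpolant through points (xs, ys) at x, mod p."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * ((x - xj) % p) % p
                den = den * ((xi - xj) % p) % p
        # pow(den, p-2, p) is the modular inverse (Fermat's little theorem)
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

def share(secret, n, t, p=P, rng=random):
    """Encode `secret` at point 0 together with t random masks at
    points p-1 .. p-t; party i receives the evaluation at point i."""
    xs = [0] + [p - k for k in range(1, t + 1)]
    ys = [secret % p] + [rng.randrange(p) for _ in range(t)]
    return [eval_interp(xs, ys, i, p) for i in range(1, n + 1)]

def reconstruct(points, p=P):
    """Recover the secret from any t+1 (point, share) pairs by
    interpolating back at 0."""
    xs, ys = zip(*points)
    return eval_interp(list(xs), list(ys), 0, p)
```

Because the shared polynomial has degree t, any t parties see evaluations consistent with every possible secret, which is the source of the information-theoretic privacy guarantee.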