🤖 AI Summary
In federated learning, the coexistence of an untrusted server and Byzantine clients makes it hard to achieve robust aggregation and information-theoretic privacy at the same time. This paper proposes the first federated aggregation scheme that jointly achieves Byzantine fault tolerance (BFT) and full information-theoretic privacy. It distributes gradients securely using Lagrange coded computing and verifiable secret sharing; introduces a gradient re-randomization step to mitigate model poisoning attacks; and, as its key novelty, embeds ReLU-based trust scoring into the privacy-preserving framework via a polynomial approximation, enabling dynamic weight assignment that is resilient to malicious updates. Theoretically, the scheme guarantees model convergence against an adversary controlling up to *f* Byzantine users, and keeps users' raw gradients information-theoretically private from both the server and the other participants. Extensive experiments demonstrate high accuracy and strong privacy preservation even under high Byzantine participation rates.
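The secret-sharing and Lagrange-interpolation machinery underlying the scheme can be illustrated with a toy Shamir sharing over a prime field. This is only a minimal sketch of the shared primitive: the paper's actual construction uses Lagrange coded computing with *verifiable* secret sharing, neither of which this snippet implements; the field modulus and threshold below are arbitrary choices for illustration.

```python
import random

P = 2**61 - 1  # a Mersenne prime, used here as a toy field modulus

def share(secret, t, n):
    """Shamir-share `secret` among n parties; any t shares reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share for party x is the degree-(t-1) polynomial evaluated at x.
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Field inverse of den via Fermat's little theorem.
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Because the shares are polynomial evaluations, adding shares pointwise yields shares of the sum, which is what lets the federator aggregate gradients without ever seeing them in the clear.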
📝 Abstract
Federated learning (FL) shows great promise in large-scale machine learning but brings new risks in terms of privacy and security. We propose ByITFL, a novel scheme for FL that provides resilience against Byzantine users while keeping the users' data private from the federator and from other users. Our scheme builds on the preexisting non-private FLTrust scheme, which tolerates malicious users through trust scores (TS) that attenuate or amplify the users' gradient updates. The trust scores are based on the ReLU function, which we approximate by a polynomial. The distributed and privacy-preserving computation in ByITFL is designed using a combination of Lagrange coded computing, verifiable secret sharing and re-randomization steps. ByITFL is the first Byzantine-resilient scheme for FL with full information-theoretic privacy.
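The trust-score idea above can be sketched in plain (non-private) form. In FLTrust, a client's trust score is the ReLU of the cosine similarity between its update and the federator's reference update; ByITFL replaces ReLU with a polynomial so the score can be evaluated under Lagrange coded computing. The degree, fitting interval, and norm-clipping details below are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def relu_poly(degree=6, num_points=201):
    """Least-squares polynomial fit of ReLU on [-1, 1] (the range of cosine similarities)."""
    xs = np.linspace(-1.0, 1.0, num_points)
    return np.polynomial.Polynomial.fit(xs, np.maximum(xs, 0.0), degree)

def aggregate(server_grad, client_grads, poly):
    """Weight each client gradient by its polynomial trust score (FLTrust-style, no privacy)."""
    def cos_sim(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    # Polynomial approximation of ReLU(cosine similarity); clamp tiny
    # negative approximation error to zero.
    ts = np.array([max(poly(cos_sim(g, server_grad)), 0.0) for g in client_grads])
    if ts.sum() == 0:
        return np.zeros_like(server_grad)
    # Rescale each client gradient to the reference gradient's norm, as in FLTrust.
    ref_norm = np.linalg.norm(server_grad)
    scaled = [g * (ref_norm / (np.linalg.norm(g) + 1e-12)) for g in client_grads]
    return sum(w * g for w, g in zip(ts, scaled)) / ts.sum()
```

A gradient pointing opposite to the reference gets cosine similarity near -1, hence a near-zero trust score, so a sign-flipping attacker contributes almost nothing to the aggregate; in ByITFL this same computation runs over secret-shared values rather than cleartext gradients.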