LoByITFL: Low Communication Secure and Private Federated Learning

📅 2024-05-29
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Federated learning (FL) faces dual challenges: client data privacy leakage and Byzantine attacks, with existing approaches often sacrificing privacy for robustness. This paper proposes the first communication-efficient, information-theoretically private, and Byzantine-resilient FL framework. The method introduces a lightweight trusted third party for preprocessing and leverages a small representative dataset, integrating an enhanced FLTrust mechanism, lightweight data transformation, information-theoretically secure aggregation, and convergence-driven design. The authors theoretically establish that the framework simultaneously satisfies information-theoretic privacy (without relying on differential privacy assumptions), strong Byzantine resilience (tolerating an arbitrary fraction of malicious clients), and guaranteed global convergence. Empirical evaluation demonstrates that the framework significantly reduces communication overhead while maintaining high model accuracy and robustness against diverse Byzantine attacks.
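The "enhanced FLTrust mechanism" mentioned above builds on FLTrust's trust-scored aggregation: the federator computes its own update on the small representative dataset, scores each client update by its cosine similarity to that reference (clipped at zero), rescales client updates to the reference magnitude, and takes the trust-weighted average. A minimal sketch of plain FLTrust-style aggregation follows; the function name and numerical details are illustrative assumptions, not the paper's exact (privacy-preserving) transformation:

```python
import numpy as np

def fltrust_aggregate(server_update, client_updates):
    """FLTrust-style robust aggregation (illustrative sketch).

    server_update: reference gradient computed on the federator's
                   small representative dataset.
    client_updates: list of client gradients, possibly Byzantine.
    """
    eps = 1e-12  # guards against division by zero
    norm_ref = np.linalg.norm(server_update)
    total = np.zeros_like(server_update)
    weight_sum = 0.0
    for g in client_updates:
        norm_g = np.linalg.norm(g)
        # Trust score: ReLU of cosine similarity with the reference.
        cos = float(server_update @ g) / (norm_ref * norm_g + eps)
        trust = max(cos, 0.0)
        # Rescale the client update to the reference magnitude,
        # then weight it by its trust score.
        total += trust * (norm_ref / (norm_g + eps)) * g
        weight_sum += trust
    return total / (weight_sum + eps)
```

A client whose update points away from the reference direction receives zero trust and is excluded from the average, which is what yields robustness even when many clients are malicious.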

📝 Abstract
Federated Learning (FL) faces several challenges, such as the privacy of the clients' data and security against Byzantine clients. Existing works treating privacy and security jointly make sacrifices on the privacy guarantee. In this work, we introduce LoByITFL, the first communication-efficient Information-Theoretic (IT) private and secure FL scheme that makes no sacrifices on the privacy guarantees while ensuring security against Byzantine adversaries. The key ingredients are a small and representative dataset available to the federator, a careful transformation of the FLTrust algorithm, and the use of a trusted third party only in a one-time preprocessing phase before the start of the learning algorithm. We provide theoretical guarantees on privacy and Byzantine resilience, a convergence guarantee, and experimental results validating our theoretical findings.
Problem

Research questions and friction points this paper is trying to address.

Ensuring both client data privacy and security against Byzantine clients in Federated Learning
Achieving communication efficiency without weakening the privacy guarantee
Providing Byzantine resilience backed by theoretical guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses a small representative dataset held by the federator as a trust reference
Carefully transforms the FLTrust algorithm to preserve IT privacy
Employs a trusted third party only in a one-time preprocessing phase
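The information-theoretically secure aggregation behind these ingredients is typically built on additive secret sharing over a finite field: each client splits its (quantized) update into random shares that sum to the update, so no coalition below the threshold learns anything about it. A minimal sketch of the sharing primitive, as an illustration only and not the paper's full protocol (which also uses the trusted third party's preprocessing):

```python
import numpy as np

rng = np.random.default_rng(0)
MODULUS = 2**31 - 1  # prime field size, chosen for illustration

def additive_shares(vec, n_shares, modulus=MODULUS):
    """Split an integer vector into n_shares uniformly random shares
    that sum to vec modulo `modulus`. Any n_shares - 1 of them are
    jointly uniform, hence reveal nothing about vec (IT privacy)."""
    shares = [rng.integers(0, modulus, size=vec.shape)
              for _ in range(n_shares - 1)]
    # Last share is fixed so that all shares sum to vec mod modulus.
    shares.append((vec - sum(shares)) % modulus)
    return shares
```

Summing all shares modulo the field size recovers the vector exactly; an aggregator that only ever sees sums of shares across clients learns the aggregate update but no individual one.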