🤖 AI Summary
Federated learning (FL) faces dual challenges: leakage of client data privacy and Byzantine attacks, and existing approaches often sacrifice privacy for robustness. This paper proposes the first communication-efficient, information-theoretically private, and Byzantine-resilient FL framework. The method introduces a lightweight trusted third party for a one-time preprocessing step and leverages a small representative dataset, integrating an enhanced FLTrust mechanism, a lightweight data transformation, information-theoretically secure aggregation, and a convergence-driven design. The authors theoretically establish that the framework simultaneously satisfies information-theoretic privacy (without relying on differential-privacy assumptions), strong Byzantine resilience (tolerating an arbitrary fraction of malicious clients), and guaranteed global convergence. Empirical evaluation demonstrates that the framework significantly reduces communication overhead while maintaining high model accuracy and robustness against diverse Byzantine attacks.
📝 Abstract
Federated Learning (FL) faces several challenges, such as the privacy of the clients' data and security against Byzantine clients. Existing works that treat privacy and security jointly sacrifice part of the privacy guarantee. In this work, we introduce LoByITFL, the first communication-efficient, Information-Theoretically (IT) private, and secure FL scheme that makes no sacrifice on the privacy guarantees while ensuring security against Byzantine adversaries. The key ingredients are a small, representative dataset available to the federator, a careful transformation of the FLTrust algorithm, and the use of a trusted third party only in a one-time preprocessing phase before the start of the learning algorithm. We provide theoretical guarantees on privacy and Byzantine resilience, along with convergence guarantees and experimental results validating our theoretical findings.
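Both sections build on the FLTrust mechanism, in which the federator uses its small representative (root) dataset to compute a reference update and score client contributions. The sketch below illustrates plain FLTrust-style aggregation (trust scores via clipped cosine similarity, magnitude rescaling, weighted averaging) and not the paper's IT-private transformation of it; the function name and the fallback to the reference update when all scores are zero are illustrative assumptions.

```python
import numpy as np

def fltrust_aggregate(client_updates, server_update):
    """FLTrust-style aggregation sketch: score each client update by the
    ReLU of its cosine similarity to a reference update computed on the
    federator's small representative dataset, rescale client updates to
    the reference norm, and average them weighted by the trust scores."""
    g0 = np.asarray(server_update, dtype=float)
    g0_norm = np.linalg.norm(g0)
    scores, rescaled = [], []
    for g in client_updates:
        g = np.asarray(g, dtype=float)
        cos = g @ g0 / (np.linalg.norm(g) * g0_norm + 1e-12)
        scores.append(max(cos, 0.0))  # ReLU: negatively aligned updates get zero trust
        rescaled.append(g * (g0_norm / (np.linalg.norm(g) + 1e-12)))
    total = sum(scores)
    if total == 0.0:
        return g0  # illustrative fallback: no client trusted, keep the reference update
    return sum(s * g for s, g in zip(scores, rescaled)) / total
```

With a reference update of `[1, 0]`, a sign-flipped update such as `[-1, 0]` receives trust score zero and is excluded, while honest updates are rescaled to the reference norm before averaging.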