🤖 AI Summary
To address the dual security threats of privacy leakage via gradient inversion and model poisoning by malicious clients in federated learning, this paper proposes the first unified defense framework. The method leverages neural network weight shuffling and formally models its equivariant/invariant properties to design a lightweight permutation verification mechanism, enabling simultaneous mitigation of both threats without compromising model accuracy. The framework is computationally efficient and deployable on resource-constrained embedded platforms. Extensive experiments across multiple benchmark datasets demonstrate robust protection against both attack vectors, achieving up to 6.7× improvement in computational efficiency while preserving original model accuracy. The core contribution lies in the first systematic integration of symmetry-aware modeling of weight shuffling with a verifiable permutation mechanism for holistic, dual-threat defense in federated learning.
📝 Abstract
Federated learning enables decentralized model training without sharing raw data, preserving data privacy. However, its vulnerability to critical security threats, such as gradient inversion and model poisoning by malicious clients, remains unresolved. Existing solutions often address these issues separately, sacrificing either system robustness or model accuracy. This work introduces Tazza, a secure and efficient federated learning framework that simultaneously addresses both challenges. By leveraging the permutation equivariance and invariance properties of neural networks via weight shuffling and shuffled model validation, Tazza enhances resilience against diverse poisoning attacks while ensuring data confidentiality and high model accuracy. Comprehensive evaluations on various datasets and embedded platforms show that Tazza achieves robust defense with up to 6.7× improved computational efficiency compared to alternative schemes, without compromising performance.
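The permutation invariance the abstract relies on can be illustrated with a toy example (a sketch, not Tazza's actual mechanism): if the hidden units of an MLP are shuffled by a permutation applied consistently to the rows of the first layer's weights and bias and to the columns of the second layer's weights, the network computes exactly the same function, so a shuffled model can still be validated on plaintext inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer MLP: y = W2 @ relu(W1 @ x + b1) + b2
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)
x = rng.normal(size=4)

def forward(W1, b1, W2, b2, x):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
    return W2 @ h + b2

# Shuffle the hidden neurons: permute rows of W1 and b1,
# and the matching columns of W2, with the same permutation.
perm = rng.permutation(8)
y_orig = forward(W1, b1, W2, b2, x)
y_shuf = forward(W1[perm], b1[perm], W2[:, perm], b2, x)

# Output is invariant to the hidden-unit permutation.
assert np.allclose(y_orig, y_shuf)
```

Because the shuffled weights reveal only the unordered multiset of neurons, such a permutation hides the original parameter layout while preserving the model's input–output behavior.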