🤖 AI Summary
This work addresses the inherent trade-off among privacy preservation, model accuracy, and system efficiency in federated learning, where differential privacy (DP) often degrades model performance and homomorphic encryption (HE) incurs prohibitive computational overhead. To reconcile these competing objectives, the authors propose Alt-FL, a framework built on a round-based interleaving strategy that combines DP, HE, and synthetic data through three methods: Privacy Interleaving (PI), Synthetic Interleaving with DP (SI/DP), and Synthetic Interleaving with HE (SI/HE). Evaluated with a LeNet-5 model on CIFAR-10 and Fashion-MNIST under a unified attacker-centric assessment framework covering representative gradient-reconstruction attacks, PI achieves the most balanced trade-offs at high privacy protection levels, while DP-based methods are preferable at intermediate privacy requirements. These results let practitioners select the most suitable strategy for their specific resource and privacy needs.
📝 Abstract
In federated learning (FL), balancing privacy protection, learning quality, and efficiency remains a challenge. Privacy protection mechanisms such as Differential Privacy (DP) degrade learning quality, while others, such as Homomorphic Encryption (HE), incur substantial system overhead. To address this, we propose Alt-FL, a privacy-preserving FL framework that combines DP, HE, and synthetic data via a novel round-based interleaving strategy. Alt-FL introduces three new methods, Privacy Interleaving (PI), Synthetic Interleaving with DP (SI/DP), and Synthetic Interleaving with HE (SI/HE), that enable flexible quality-efficiency trade-offs while providing privacy protection. We systematically evaluate Alt-FL against representative reconstruction attacks, including Deep Leakage from Gradients, Inverting Gradients, When the Curious Abandon Honesty, and Robbing the Fed, using a LeNet-5 model on CIFAR-10 and Fashion-MNIST. To enable a fair comparison between DP- and HE-based defenses, we introduce a new attacker-centric framework that compares empirical attack success rates across the three proposed interleaving methods. Our results show that, for the studied attacker model and datasets, PI achieves the most balanced trade-offs at high privacy protection levels, while DP-based methods are preferable at intermediate privacy requirements. We also discuss how such results can serve as a basis for selecting privacy-preserving FL methods under varying privacy and resource constraints.
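To make the round-based interleaving idea concrete, here is a minimal sketch of a cyclic round schedule. The function name and the simple round-robin ordering are illustrative assumptions, not the paper's actual scheduling policy:

```python
import itertools

# Hypothetical sketch: assign one Alt-FL method to each FL round.
# Method names ("PI", "SI/DP", "SI/HE") follow the abstract; the
# round-robin ordering is an assumption for illustration only.
def make_schedule(mechanisms, num_rounds):
    """Cycle through the given privacy mechanisms, one per round."""
    cycle = itertools.cycle(mechanisms)
    return [next(cycle) for _ in range(num_rounds)]

schedule = make_schedule(["PI", "SI/DP", "SI/HE"], num_rounds=6)
# Each training round then applies its scheduled mechanism to the
# client updates before aggregation.
```

In a real deployment the schedule could instead be chosen adaptively from the privacy budget and resource constraints, which is the trade-off space the paper evaluates.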