AI Summary
Federated learning (FL) faces privacy risks from semi-honest servers or malicious clients launching data reconstruction attacks via model updates. Existing multi-key homomorphic encryption (MKHE)-based privacy-preserving FL schemes suffer from prohibitively high computational and communication overhead, hindering practical deployment. This paper proposes MASER, an efficient MKHE-FL framework that integrates consensus-driven model pruning with gradient/parameter slicing into the MKHE-based secure aggregation pipeline, enabling strict privacy guarantees while substantially reducing costs. Experiments demonstrate that MASER accelerates state-of-the-art MKHE-FL methods by 3.03 to 8.29×, incurs only 1.48 to 5× additional overhead over standard FL, and maintains comparable classification accuracy under both IID and non-IID data distributions.
Abstract
Federated Learning (FL) is susceptible to privacy attacks, such as data reconstruction attacks, in which a semi-honest server or a malicious client infers information about other clients' datasets from their model updates or gradients. To enhance the privacy of FL, recent studies have combined Multi-Key Homomorphic Encryption (MKHE) and FL, making it possible to aggregate model updates encrypted under different keys without decrypting them. Despite the privacy guarantees of MKHE, existing approaches are not well-suited for real-world deployment due to their high computation and communication overhead. We propose MASER, an efficient MKHE-based Privacy-Preserving FL framework that combines consensus-based model pruning and slicing techniques to reduce this overhead. Our experimental results show that MASER is 3.03 to 8.29 times more efficient than existing MKHE-based FL approaches in terms of computation and communication overhead while maintaining comparable classification accuracy to standard FL algorithms. Compared to a vanilla FL algorithm, the overhead of MASER is only 1.48 to 5 times higher, striking a good balance between privacy, accuracy, and efficiency in both IID and non-IID settings.
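The abstract's pipeline (consensus-based pruning, then slicing, then MKHE encryption and server-side aggregation of ciphertexts) can be illustrated with a toy sketch. The code below is not the paper's algorithm: it assumes a majority-vote-on-top-k-magnitudes rule as a stand-in for the consensus pruning, uses contiguous splits as a stand-in for gradient/parameter slicing, and replaces MKHE ciphertext addition with plain sums. All function names (`consensus_prune`, `slice_vector`) are illustrative.

```python
import numpy as np

def consensus_prune(updates, keep_ratio=0.5):
    """Hypothetical consensus rule: keep coordinates that a majority of
    clients rank among their top-k largest-magnitude entries."""
    k = max(1, int(keep_ratio * updates[0].size))
    votes = np.zeros(updates[0].size, dtype=int)
    for u in updates:
        top_k = np.argsort(np.abs(u))[-k:]   # client's local top-k indices
        votes[top_k] += 1
    return votes >= (len(updates) + 1) // 2  # majority vote -> boolean mask

def slice_vector(v, num_slices):
    """Split a pruned update into contiguous slices; in MASER each slice
    would be encrypted and aggregated independently to bound ciphertext size."""
    return np.array_split(v, num_slices)

# Toy demo with three clients' local updates.
clients = [np.array([1.0, 2.0, 0.1, 4.0]),
           np.array([0.9, 2.1, 0.0, 3.8]),
           np.array([1.1, 1.9, 0.2, 4.2])]

mask = consensus_prune(clients)              # agree on which weights to keep
pruned = [u[mask] for u in clients]          # each client prunes locally
# Slice-wise "aggregation": plain sums stand in for MKHE ciphertext addition.
agg_slices = [sum(s) for s in zip(*(slice_vector(u, 2) for u in pruned))]
average = np.concatenate(agg_slices) / len(clients)
```

In the real protocol, each slice would be encrypted under a client-specific key before upload, and the server would add ciphertexts without ever decrypting individual updates; the pruning and slicing shrink what must be encrypted, which is where the reported speedups come from.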