TAPFed: Threshold Secure Aggregation for Privacy-Preserving Federated Learning

📅 2024-09-01
🏛️ IEEE Transactions on Dependable and Secure Computing
📈 Citations: 3
Influential: 0
🤖 AI Summary
Malicious aggregators in multi-party federated learning pose severe gradient-leakage and privacy risks. Method: This paper proposes TAPFed, a privacy-preserving federated learning framework built on a proposed threshold functional encryption scheme, operating with multiple decentralized aggregators and tolerating a bounded number of malicious ones. It removes reliance on a trusted third party and defends against recently demonstrated inference attacks such as the disaggregation attack. Contribution/Results: The paper provides formal security and privacy analyses of TAPFed. Empirical evaluation shows model quality equivalent to state-of-the-art approaches while reducing transmission overhead by 29%–45% across different model training scenarios, and demonstrates resilience to inference attacks mounted by curious aggregators, to which most existing approaches are susceptible.

📝 Abstract
Federated learning is a computing paradigm that enhances privacy by enabling multiple parties to collaboratively train a machine learning model without revealing personal data. However, current research indicates that traditional federated learning platforms are unable to ensure privacy due to privacy leaks caused by the interchange of gradients. To achieve privacy-preserving federated learning, integrating secure aggregation mechanisms is essential. Unfortunately, existing solutions are vulnerable to recently demonstrated inference attacks such as the disaggregation attack. This article proposes TAPFed, an approach for achieving privacy-preserving federated learning in the context of multiple decentralized aggregators with malicious actors. TAPFed uses a proposed threshold functional encryption scheme and allows for a certain number of malicious aggregators while maintaining security and privacy. We provide formal security and privacy analyses of TAPFed and compare it to various baselines through experimental evaluation. Our results show that TAPFed offers equivalent performance in terms of model quality compared to state-of-the-art approaches while reducing transmission overhead by 29%–45% across different model training scenarios. Most importantly, TAPFed can defend against recently demonstrated inference attacks caused by curious aggregators, which the majority of existing approaches are susceptible to.
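To make the threshold idea concrete, here is a minimal illustrative sketch using Shamir secret sharing over a prime field. This is *not* TAPFed's threshold functional encryption scheme; it is a hypothetical, simplified stand-in showing the same threshold principle the abstract describes: fewer than `t` colluding aggregators learn nothing about individual client updates, while any `t` of them can jointly recover only the aggregate. All names (`share`, `reconstruct`, the toy modulus `P`) are invented for this example.

```python
# Hypothetical sketch of threshold secure aggregation via Shamir secret
# sharing -- an illustration of the threshold principle, NOT TAPFed's
# actual threshold functional encryption construction.
import random

P = 2**31 - 1  # toy prime field modulus

def share(secret, t, n):
    """Split `secret` into n Shamir shares with reconstruction threshold t."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 over any t shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# Three clients hold scalar gradient updates; 4 aggregators, threshold 3.
updates = [5, 11, 20]
t, n = 3, 4
client_shares = [share(u, t, n) for u in updates]

# Each aggregator sums the shares it received (Shamir shares are additive),
# so each holds a share of the SUM, never a client's plaintext update.
agg_shares = [(x + 1, sum(cs[x][1] for cs in client_shares) % P)
              for x in range(n)]

# Any t aggregators reconstruct only the aggregate: 5 + 11 + 20.
print(reconstruct(agg_shares[:t]))  # -> 36
```

By linearity of the sharing, the aggregators never need to see a plaintext update, and any subset smaller than `t` holds shares that are information-theoretically independent of the secrets.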
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Privacy Protection
Information Security
Innovation

Methods, ideas, or system contributions that make the work stand out.

Threshold-Secure Aggregation
Privacy-Preserving Federated Learning
Malicious Attack Resilience
Runhua Xu
Beihang University | former RSM@IBM Research
privacy-enhancing tech · security/privacy in AI/ML · applied crypto · blockchain
Bo Li
School of Computer Science and Engineering, Beihang University, Beijing, China, 100191; Zhongguancun Laboratory, Beijing, China
Chao Li
Beijing Key Laboratory of Security and Privacy in Intelligent Transportation, Beijing Jiaotong University, China, 100044
J. Joshi
University of Pittsburgh, USA, 15260
Shuai Ma
School of Computer Science and Engineering, Beihang University, Beijing, China, 100191
Jianxin Li
School of Computer Science and Engineering, Beihang University, Beijing, China, 100191; Zhongguancun Laboratory, Beijing, China