🤖 AI Summary
Gradient inversion attacks pose severe threats to data privacy in federated learning, while fully homomorphic encryption remains impractical due to prohibitive computational overhead. This work systematically investigates selective encryption as a defense mechanism. We propose a distance-based gradient saliency analysis framework and provide the first empirical validation of gradient magnitude's general effectiveness against optimization-based inversion attacks. Combining gradient magnitude with layer-wise sensitivity and other significance metrics, we evaluate robustness against state-of-the-art attacks across diverse architectures, including LeNet, CNN, BERT, and GPT-2. Results show that judicious selection of the encrypted gradient subset reduces computational overhead by over 60% while preserving model accuracy and strong privacy guarantees. However, no universally optimal encryption strategy exists; the encrypted subset must instead be adapted to the model architecture and privacy budget. This work establishes theoretical foundations and delivers practical, deployable guidelines for efficient privacy-preserving federated learning.
📝 Abstract
Gradient inversion attacks pose significant privacy threats to distributed training frameworks such as federated learning, enabling malicious parties to reconstruct sensitive local training data from the gradients communicated between clients and the aggregation server. While traditional encryption-based defenses, such as homomorphic encryption, offer strong privacy guarantees without compromising model utility, they often incur prohibitive computational overhead. To mitigate this, selective encryption has emerged as a promising approach that encrypts only a subset of gradient data, chosen by the data's significance under a certain metric. However, there have been few systematic studies on how to specify this metric in practice. This paper systematically evaluates selective encryption methods with different significance metrics against state-of-the-art attacks. Our findings demonstrate the feasibility of selective encryption in reducing computational overhead while maintaining resilience against attacks. We propose a distance-based significance analysis framework that provides theoretical foundations for selecting critical gradient elements for encryption. Through extensive experiments on different model architectures (LeNet, CNN, BERT, GPT-2) and attack types, we identify gradient magnitude as a generally effective metric for protection against optimization-based gradient inversion attacks. However, no single selective encryption strategy is universally optimal across all attack scenarios, and we provide guidelines for choosing appropriate strategies for different model architectures and privacy requirements.
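To make the magnitude-based significance metric concrete, here is a minimal NumPy sketch of the selection step: partition a gradient tensor into a high-magnitude subset (the candidates for encryption) and a low-magnitude remainder. The function name and the `ratio` parameter are illustrative, not from the paper, and the actual homomorphic encryption call is omitted; the sketch only shows how the encrypted subset would be chosen.

```python
import numpy as np

def select_gradients_by_magnitude(grad: np.ndarray, ratio: float = 0.4) -> np.ndarray:
    """Return a boolean mask marking the top-`ratio` fraction of entries of
    `grad` by absolute magnitude -- the subset that would be encrypted.
    (Illustrative helper; the paper's exact selection rule may differ.)"""
    flat = np.abs(grad).ravel()
    k = max(1, int(round(ratio * flat.size)))
    # k-th largest |g| serves as the cutoff; exact ties may admit extra entries.
    threshold = np.partition(flat, -k)[-k]
    return np.abs(grad) >= threshold

# Toy example: mark 40% of a random gradient tensor for encryption and
# leave the remaining low-magnitude entries in plaintext.
rng = np.random.default_rng(0)
grad = rng.normal(size=(8, 8))
mask = select_gradients_by_magnitude(grad, ratio=0.4)
to_encrypt = grad[mask]    # would be sent under homomorphic encryption
plaintext = grad[~mask]    # sent in the clear
```

Under this kind of split, every encrypted entry has magnitude at least as large as every plaintext entry, which is what makes the encrypted fraction (and hence the ciphertext overhead) directly tunable via `ratio`.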