🤖 AI Summary
To address the vulnerability of model parameters to privacy inference attacks in federated learning, this paper proposes a synergistic privacy-preserving mechanism integrating randomized masking with gradient quantization. Specifically, clients apply controllable random masking locally prior to low-bit gradient quantization, enhancing parameter irreversibility while achieving communication compression. This is further augmented by differential privacy noise injection and secure aggregation, establishing a multi-tiered privacy guarantee. Compared to single-technique baselines, the method achieves significantly improved adversarial robustness under strict privacy budgets (ε ≤ 4) without compromising convergence stability. Extensive experiments across image classification and text prediction tasks demonstrate an accuracy drop of less than 1.2%, alongside an approximately 40% reduction in communication overhead. These results validate the method's effective balance among privacy preservation, model utility, and system efficiency.
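The client-side pipeline summarized above (randomized masking, then low-bit quantization, then noise injection) might be sketched roughly as follows. This is a minimal illustration only: the function name, masking probability, bit width, and noise scale are assumptions for exposition, not the paper's actual parameters or calibration.

```python
import numpy as np

def protect_gradient(grad, num_bits=4, mask_prob=0.1, noise_std=0.01, seed=0):
    """Illustrative client-side protection: masking -> quantization -> noise.
    All parameter values here are hypothetical, not the paper's settings."""
    rng = np.random.default_rng(seed)
    # 1. Randomized masking: zero out a random subset of coordinates.
    keep = rng.random(grad.shape) >= mask_prob
    g = grad * keep
    # 2. Uniform low-bit quantization into 2**num_bits levels
    #    (compresses communication; also coarsens the signal).
    levels = 2 ** num_bits - 1
    lo, hi = g.min(), g.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((g - lo) / scale)   # integer codes in [0, levels]
    g_hat = codes * scale + lo           # dequantized gradient
    # 3. Differential-privacy-style Gaussian noise injection.
    return g_hat + rng.normal(0.0, noise_std, size=g.shape)

grad = np.linspace(-1.0, 1.0, 8)
protected = protect_gradient(grad)
```

In a real deployment the integer codes (not the dequantized values) would be transmitted, and the noise scale would be calibrated to the target privacy budget before secure aggregation on the server side.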
📄 Abstract
Experimental results across various models and tasks demonstrate that our approach not only maintains strong model performance in federated learning settings but also achieves enhanced protection of model parameters compared to baseline methods.