GCFL: A Gradient Correction-based Federated Learning Framework for Privacy-preserving CPSS

πŸ“… 2025-06-04
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing differentially private (DP) federated learning approaches suffer from degraded convergence speed and substantial classification accuracy loss in privacy-sensitive cyber-physical-social systems (CPSS), primarily due to noise-induced gradient distortion. To address this, we propose a server-driven DP federated learning framework featuring a novel gradient bias detection and projection correction mechanism, which dynamically calibrates noisy gradients while strictly satisfying $(\varepsilon,\delta)$-differential privacy. Our method integrates gradient clipping, Gaussian noise injection, client-side local training, and multi-client gradient alignment optimization. Extensive experiments on multiple benchmark datasets demonstrate that the proposed framework achieves state-of-the-art (SOTA) classification accuracy under identical privacy budgets, with up to 32% faster convergence compared to mainstream DP-Fed methods.
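The client-side step the summary describes (gradient clipping followed by Gaussian noise injection) is the standard DP-SGD-style perturbation. A minimal sketch, assuming per-client gradients as NumPy arrays; the function name and hyperparameters (`clip_norm`, `noise_multiplier`) are illustrative, not the paper's notation:

```python
import numpy as np

def dp_client_gradient(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a local gradient to clip_norm, then add Gaussian noise.

    Clipping bounds the sensitivity of each client's update; the noise
    scale (noise_multiplier * clip_norm) is what the privacy accountant
    would translate into an (epsilon, delta) guarantee.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_norm)  # leaves small gradients intact
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise
```

With `noise_multiplier=0.0` the function reduces to plain norm clipping, which makes the sensitivity bound easy to check in isolation.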

πŸ“ Abstract
Federated learning, as a distributed architecture, shows great promise for applications in Cyber-Physical-Social Systems (CPSS). In order to mitigate the privacy risks inherent in CPSS, the integration of differential privacy with federated learning has attracted considerable attention. Existing research mainly focuses on dynamically adjusting the noise added or discarding certain gradients to mitigate the noise introduced by differential privacy. However, these approaches fail to remove the noise that hinders convergence and correct the gradients affected by the noise, which significantly reduces the accuracy of model classification. To overcome these challenges, this paper proposes a novel framework for differentially private federated learning that balances rigorous privacy guarantees with accuracy by introducing a server-side gradient correction mechanism. Specifically, after clients perform gradient clipping and noise perturbation, our framework detects deviations in the noisy local gradients and employs a projection mechanism to correct them, mitigating the negative impact of noise. Simultaneously, gradient projection promotes the alignment of gradients from different clients and guides the model towards convergence to a global optimum. We evaluate our framework on several benchmark datasets, and the experimental results demonstrate that it achieves state-of-the-art performance under the same privacy budget.
Problem

Research questions and friction points this paper is trying to address.

Mitigating privacy risks in CPSS using federated learning
Reducing noise impact on model convergence in differential privacy
Improving classification accuracy via gradient correction mechanism
Innovation

Methods, ideas, or system contributions that make the work stand out.

Server-side gradient correction mechanism
Noise deviation detection and projection
Alignment of gradients for global convergence
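One plausible reading of the server-side mechanism above: flag a noisy client gradient as deviating when its cosine similarity with the clients' mean direction drops below a threshold, and project it onto that direction to restore alignment. This is an assumed sketch of the idea, not the paper's exact algorithm; `threshold` and the choice of the mean as reference are illustrative:

```python
import numpy as np

def project_correct(noisy_grads, threshold=0.0):
    """Server-side deviation detection and projection correction (sketch).

    Gradients whose cosine similarity with the mean client direction
    falls below `threshold` are projected onto that direction, so that
    noise-distorted updates are pulled back toward the consensus.
    Returns the aggregated (averaged) corrected gradient.
    """
    ref = np.mean(noisy_grads, axis=0)
    ref_unit = ref / (np.linalg.norm(ref) + 1e-12)
    corrected = []
    for g in noisy_grads:
        cos = (g @ ref_unit) / (np.linalg.norm(g) + 1e-12)
        if cos < threshold:                   # deviation detected
            g = (g @ ref_unit) * ref_unit     # keep only the aligned component
        corrected.append(g)
    return np.mean(corrected, axis=0)
```

Because the correction happens after clipping and noise injection on the clients, it operates only on already-privatized gradients, which is consistent with the claim that the privacy guarantee is preserved.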
πŸ”Ž Similar Papers
No similar papers found.
Jiayi Wan
School of Software Engineering, Nanjing University of Information Science and Technology, Nanjing, China
Xiang Zhu
College of Meteorology and Oceanography, National University of Defense Technology, Changsha, China
Fanzhen Liu
CSIRO’s Data61, Sydney, Australia
Wei Fan
Medical Sciences Division, University of Oxford, Oxford, UK
Xiaolong Xu
2019–2025 Ant Group / 2025–now ByteDance
Graph Neural Networks · Knowledge Graph · Federated Learning