Secure Generalization through Stochastic Bidirectional Parameter Updates Using Dual-Gradient Mechanism

๐Ÿ“… 2025-04-03
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address weak model generalization, insufficient feature representation, and high privacy leakage risks in federated learning, this paper proposes a Random Bidirectional Parameter Update (RBDU) mechanism. RBDU introduces systematic parameter perturbations on the server based on historical global models to generate fine-grained (e.g., filter-level) diverse proximal models for each client, and employs a dual-gradient-driven strategy to jointly optimize generalization and privacy robustness. To our knowledge, RBDU is the first method to simultaneously enhance generalization performance, feature representation capability, and robustness against membership inference attacksโ€”without compromising model utility. Extensive experiments on four benchmark datasets demonstrate that RBDU consistently outperforms state-of-the-art approaches: it achieves an average accuracy gain of 2.1% and reduces membership inference attack success rates by 37.5%, with validation through both quantitative metrics and visual analysis.

Technology Category

Application Category

๐Ÿ“ Abstract
Federated learning (FL) has gained increasing attention due to privacy-preserving collaborative training on decentralized clients, mitigating the need to upload sensitive data to a central server directly. Nonetheless, recent research has underscored the risk of exposing private data to adversaries, even within FL frameworks. In general, existing methods sacrifice performance while ensuring resistance to privacy leakage in FL. We overcome these issues and generate diverse models at a global server through the proposed stochastic bidirectional parameter update mechanism. Using diverse models, we improved the generalization and feature representation in the FL setup, which also helped to improve the robustness of the model against privacy leakage without hurting the model's utility. We use global models from past FL rounds to follow systematic perturbation in parameter space at the server to ensure model generalization and resistance against privacy attacks. We generate diverse models (in close neighborhoods) for each client by using systematic perturbations in model parameters at a fine-grained level (i.e., altering each convolutional filter across the layers of the model) to improve the generalization and security perspective. We evaluated our proposed approach on four benchmark datasets to validate its superiority. We surpassed the state-of-the-art methods in terms of model utility and robustness towards privacy leakage. We have proven the effectiveness of our method by evaluating performance using several quantitative and qualitative results.
Problem

Research questions and friction points this paper is trying to address.

Enhance FL model generalization without privacy leaks
Improve robustness against adversarial data exposure
Generate diverse models via stochastic parameter updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stochastic bidirectional parameter updates enhance security
Dual-gradient mechanism improves model generalization
Fine-grained perturbations boost robustness against privacy attacks
๐Ÿ”Ž Similar Papers
No similar papers found.
S
Shourya Goel
Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, Uttarakhand, India
H
Himanshi Tibrewal
Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, Uttarakhand, India
Anant Jain
Anant Jain
Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, Uttarakhand, India
A
Anshul Pundhir
Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, Uttarakhand, India
Pravendra Singh
Pravendra Singh
Assistant Professor, IIT Roorkee
Deep LearningMachine LearningComputer VisionArtificial Intelligence