PDLRecover: Privacy-preserving Decentralized Model Recovery with Machine Unlearning

📅 2025-06-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
In decentralized learning, efficiently and privately recovering a poisoned global model without full retraining remains challenging. Method: This paper proposes PDLRecover, a lightweight, privacy-preserving model repair framework comprising three stages: client-side preprocessing, periodic curvature (Hessian) updates, and a final exact correction. It integrates secret sharing with the linearity of approximate Hessian computation to enable secure reuse of historical models, combining approximate Hessian estimation, machine unlearning, and aggregation of historical updates. Contribution/Results: The method keeps local parameters confidential (no raw or intermediate parameters are exposed) and achieves recovery accuracy comparable to full retraining. Empirical evaluation shows over 70% reduction in computational overhead, robust convergence, and compatibility with formal privacy guarantees such as differential privacy.

📝 Abstract
Decentralized learning is vulnerable to poisoning attacks, where malicious clients manipulate local updates to degrade global model performance. Existing defenses mainly detect and filter malicious models, aiming to prevent a limited number of attackers from corrupting the global model. However, restoring an already compromised global model remains a challenge. A direct approach is to remove malicious clients and retrain the model using only the benign clients. Yet, retraining is time-consuming, computationally expensive, and may compromise model consistency and privacy. We propose PDLRecover, a novel method to recover a poisoned global model efficiently by leveraging historical model information while preserving privacy. The main challenge lies in protecting shared historical models while enabling parameter estimation for model recovery. By exploiting the linearity of approximate Hessian matrix computation, we apply secret sharing to protect historical updates, ensuring local models are not leaked during transmission or reconstruction. PDLRecover introduces client-side preparation, periodic recovery updates, and a final exact update to ensure robustness and convergence of the recovered model. Periodic updates maintain accurate curvature information, and the final step ensures high-quality convergence. Experiments show that the recovered global model achieves performance comparable to a fully retrained model at significantly reduced computation and time cost. Moreover, PDLRecover effectively prevents leakage of local model parameters, ensuring both accuracy and privacy in recovery.
Problem

Research questions and friction points this paper is trying to address.

Recover poisoned global model efficiently without retraining
Protect historical model privacy during parameter estimation
Maintain model accuracy and convergence with reduced computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages historical model information for recovery
Uses secret sharing to protect historical updates
Combines periodic and final exact updates
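The privacy mechanism above hinges on a simple property: additive secret sharing commutes with linear operations, so shares of historical updates can be aggregated without reconstructing any individual client's update. The following is a minimal sketch of that idea, not the paper's implementation; the function names, integer quantization, and modulus are illustrative assumptions.

```python
import numpy as np

MODULUS = 2**31 - 1  # illustrative prime-sized modulus for arithmetic shares


def share(vec, n_parties, rng=None):
    """Split an integer vector into n additive shares summing to vec mod MODULUS."""
    rng = rng or np.random.default_rng()
    shares = [rng.integers(0, MODULUS, size=vec.shape, dtype=np.int64)
              for _ in range(n_parties - 1)]
    # Last share is chosen so all shares sum back to the secret.
    shares.append((vec - sum(shares)) % MODULUS)
    return shares


def reconstruct(shares):
    """Recover the secret vector from all additive shares."""
    return sum(shares) % MODULUS


# Two benign clients' historical updates, quantized to integers for sharing.
u1 = np.array([3, 1, 4], dtype=np.int64)
u2 = np.array([2, 7, 1], dtype=np.int64)

s1 = share(u1, 3)
s2 = share(u2, 3)

# Linearity: each party adds its own shares locally; reconstructing the
# summed shares yields the aggregate u1 + u2 without exposing u1 or u2.
agg_shares = [a + b for a, b in zip(s1, s2)]
assert np.array_equal(reconstruct(agg_shares), (u1 + u2) % MODULUS)
```

Because approximate Hessian estimation is built from linear combinations of such historical updates, the same trick lets the recovery computation run over shares; real deployments would also need fixed-point encoding of floats and dropout handling, which this sketch omits.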
Xiangman Li
Electrical and Computer Engineering, Queen's University
Xiaodong Wu
Department of Electrical and Computer Engineering and Ingenuity Labs Research Institute, Queen’s University, Kingston, Ontario, Canada K7L 3N6
Jianbing Ni
Queen's University
AI Safety and Security · Cloud-Edge Security · Mobile Network Security · Blockchain Technology
Mohamed Mahmoud
Department of Electrical and Computer Engineering, Tennessee Tech. University, Cookeville, TN 38505, USA
Maazen Alsabaan
King Saud University
Computer and Electrical Engineering