Towards Robust Knowledge Removal in Federated Learning with High Data Heterogeneity

📅 2025-10-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In highly data-heterogeneous federated learning settings, efficiently and compliantly removing a client’s knowledge—especially under privacy regulations such as GDPR—remains challenging due to reliance on multi-round communication and iterative coordination. Method: This paper proposes a one-step model contribution withdrawal method that uniquely integrates Task Arithmetic with Neural Tangent Kernel (NTK) theory to enable local, single-shot knowledge erasure without global retraining or repeated server-client interaction. Contribution/Results: The method achieves “one-click” contribution revocation while preserving global model performance nearly unchanged. Removal latency is reduced from multiple communication rounds to a single local computation, significantly enhancing system availability and real-time responsiveness. Extensive experiments demonstrate high accuracy and robustness under severe non-IID data distributions. The approach provides a verifiable, low-overhead forgetting guarantee for privacy-sensitive federated learning, satisfying regulatory requirements for data subject rights.
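The NTK connection hints at why a single parameter-space operation can suffice: in the kernel regime the network behaves nearly linearly in its weights, so client contributions compose approximately additively and can be negated directly. A hedged sketch of that reasoning (symbols are illustrative, not taken from the paper):

```latex
% NTK linearization around the initialization \theta_0:
f(x;\theta) \;\approx\; f(x;\theta_0) + \nabla_\theta f(x;\theta_0)^{\top}(\theta - \theta_0)

% Task vector of client i, and one-step withdrawal of a client's contribution
% from the global model (\alpha is an assumed scaling coefficient):
\tau_i = \theta_i - \theta_0,
\qquad
\theta_{\mathrm{forget}} = \theta_{\mathrm{global}} - \alpha\,\tau_{\mathrm{client}}
```

Under the linear approximation, subtracting $\alpha\,\tau_{\mathrm{client}}$ from the global weights removes (to first order) that client's additive effect on the model's outputs without any further communication rounds.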

📝 Abstract
Nowadays, there is an abundance of portable devices capable of collecting large amounts of data and equipped with decent computational power. This has opened the possibility of training AI models in a distributed manner while preserving the participating clients' privacy. However, because of privacy regulations and safety requirements, removing a client's contribution from the model upon request has become mandatory. The cleansing process must satisfy specific efficacy and time requirements. In recent years, research efforts have produced several knowledge removal methods, but these require multiple communication rounds between the data holders and the process coordinator. This can leave the system without an effective model until the removal process completes, resulting in a disservice to its users. In this paper, we introduce an innovative solution based on Task Arithmetic and the Neural Tangent Kernel to rapidly remove a client's influence from a model.
Problem

Research questions and friction points this paper is trying to address.

Removing client knowledge from federated learning models
Achieving rapid knowledge removal with minimal communication rounds
Addressing data heterogeneity challenges in federated unlearning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Task Arithmetic for client influence removal
Applies Neural Tangent Kernel for rapid unlearning
Enables efficient knowledge removal in federated learning
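The paper does not include code in this summary; the task-arithmetic idea behind the bullets above can be sketched as subtracting the departing client's task vector (its local parameter delta) from the global weights in a single local step. Function and parameter names (`withdraw_client`, `alpha`) are illustrative assumptions, not the authors' API.

```python
import numpy as np

def withdraw_client(theta_global, theta_client, theta_init, alpha=1.0):
    """One-step contribution withdrawal via task-vector negation.

    The client's task vector is the parameter delta its local training
    produced; subtracting a scaled copy from the global model removes
    its influence without retraining. Scaling alpha is an assumption.
    """
    tau_client = theta_client - theta_init      # client's task vector
    return theta_global - alpha * tau_client    # negate its contribution

# Toy example: global model = init + sum of two clients' task vectors.
theta_init = np.zeros(4)
tau_a = np.array([1.0, 0.0, 2.0, 0.0])
tau_b = np.array([0.0, 3.0, 0.0, 1.0])
theta_global = theta_init + tau_a + tau_b
theta_a = theta_init + tau_a                    # client A's local model

unlearned = withdraw_client(theta_global, theta_a, theta_init)
# After withdrawal only client B's task vector remains.
```

Real networks are not exactly linear in their weights, which is where the NTK analysis comes in: near initialization the linearized model makes this subtraction a principled first-order approximation rather than a heuristic.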
Riccardo Santi
AImageLab, University of Modena and Reggio Emilia, Italy
Riccardo Salami
AImageLab, University of Modena and Reggio Emilia, Italy
Simone Calderara
University of Modena and Reggio Emilia, Italy
Machine learning, continual learning, tracking, pattern recognition