🤖 AI Summary
In highly data-heterogeneous federated learning settings, efficiently and compliantly removing a client's knowledge—especially under privacy regulations such as GDPR—remains challenging because existing approaches rely on multi-round communication and iterative coordination.
Method: This paper proposes a one-step model contribution withdrawal method that uniquely integrates Task Arithmetic with Neural Tangent Kernel (NTK) theory to enable local, single-shot knowledge erasure without global retraining or repeated server-client interaction.
Contribution/Results: The method achieves “one-click” contribution revocation while preserving global model performance nearly unchanged. Removal latency is reduced from multiple communication rounds to a single local computation, significantly enhancing system availability and real-time responsiveness. Extensive experiments demonstrate high accuracy and robustness under severe non-IID data distributions. The approach provides a verifiable, low-overhead forgetting guarantee for privacy-sensitive federated learning, satisfying regulatory requirements for data subject rights.
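At a high level, Task Arithmetic treats a client's contribution as a "task vector" — the difference between the client's fine-tuned weights and a common starting point — which can then be subtracted from the global model. The sketch below illustrates only this generic task-vector idea with hypothetical names and plain parameter dictionaries; it omits the NTK-based component that the paper combines with it.

```python
# Minimal sketch of task-arithmetic-style contribution removal.
# All names here are illustrative, not the paper's implementation;
# the actual method also incorporates Neural Tangent Kernel theory.

def task_vector(client_params, init_params):
    """Per-parameter difference capturing a client's contribution."""
    return {k: client_params[k] - init_params[k] for k in init_params}

def withdraw(global_params, client_params, init_params, scale=1.0):
    """Subtract the (scaled) client task vector from the global model."""
    tv = task_vector(client_params, init_params)
    return {k: global_params[k] - scale * tv[k] for k in global_params}

init = {"w": 0.0}          # shared initialization
client = {"w": 0.5}        # client's locally fine-tuned weight
global_m = {"w": 0.8}      # aggregated global weight
forgotten = withdraw(global_m, client, init)
print(forgotten["w"])      # ≈ 0.3: the client's delta is removed
```

Because the subtraction is a single local computation on the server's copy of the weights, no further server-client communication rounds are needed, which is the source of the latency reduction described above.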
📝 Abstract
Nowadays, there is an abundance of portable devices capable of collecting large amounts of data and equipped with decent computational power. This has opened the possibility of training AI models in a distributed manner while preserving the participating clients' privacy. However, privacy regulations and safety requirements now make it mandatory to remove a client's contribution from the model when necessary. The cleansing process must satisfy specific efficacy and time requirements. In recent years, research efforts have produced several knowledge-removal methods, but these require multiple communication rounds between the data holders and the process coordinator. This can leave the system without an effective model until the removal process completes, resulting in a disservice to its users. In this paper, we introduce an innovative solution based on Task Arithmetic and the Neural Tangent Kernel to rapidly remove a client's influence from a model.