How to Protect Models against Adversarial Unlearning?

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses adversarial unlearning, a security threat in which malicious actors submit forged unlearning requests to significantly degrade model performance, and provides the first systematic analysis of its underlying mechanisms and key influencing factors. The authors propose a general-purpose defense framework that integrates model-architecture-aware data selection constraints, real-time performance monitoring during unlearning, and adaptive parameter correction. The framework satisfies legitimate unlearning requirements while mitigating adversarial attacks, without relying on a specific model architecture or training paradigm, which makes it practical and broadly applicable. Experimental results across diverse attack settings show that the method reduces the average accuracy drop by 62%, substantially enhancing unlearning security and robustness. This work establishes both theoretical foundations and practical methodology for trustworthy machine unlearning mechanisms.
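The defense loop described above (checkpoint, monitor held-out performance during unlearning, then adaptively correct) can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's actual implementation: the names `unlearn_with_guard`, `evaluate`, `apply_unlearn`, and the drop budget `max_drop` are all assumptions.

```python
# Hedged sketch: process unlearning requests one at a time, monitor
# held-out accuracy after each step, and roll back to a checkpoint
# when the cumulative drop exceeds a budget (the "correction" step).
import copy

def unlearn_with_guard(model, requests, evaluate, apply_unlearn, max_drop=0.02):
    """Apply each unlearn request only if held-out accuracy stays within
    `max_drop` of the baseline; otherwise restore the snapshot and flag
    the request as potentially adversarial."""
    baseline = evaluate(model)
    accepted, rejected = [], []
    for req in requests:
        snapshot = copy.deepcopy(model)   # checkpoint before unlearning
        apply_unlearn(model, req)         # in-place unlearning step
        if baseline - evaluate(model) > max_drop:
            model.clear()                 # roll back: restore checkpoint
            model.update(snapshot)
            rejected.append(req)
        else:
            accepted.append(req)
    return accepted, rejected

# Toy usage: the "model" is a dict of per-class accuracies, and
# unlearning class "c" is the adversarial request that hurts most.
model = {"a": 0.9, "b": 0.9, "c": 0.9}

def evaluate(m):
    return sum(m.values()) / 3

def apply_unlearn(m, cls):
    m[cls] -= 0.3 if cls == "c" else 0.01

accepted, rejected = unlearn_with_guard(model, ["a", "b", "c"],
                                        evaluate, apply_unlearn)
# accepted == ["a", "b"]; rejected == ["c"]; model["c"] restored to 0.9
```

The key design choice is that the guard is black-box: it only needs an evaluation oracle and a way to snapshot the model, so it does not depend on the architecture or training paradigm, matching the generality claimed in the summary.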

📝 Abstract
AI models must support unlearning both to satisfy legal acts such as the AI Act or GDPR, and to remove toxic content, debias the model, eliminate the impact of malicious instances, or adapt to changes in the distribution of the data on which the model operates. Unfortunately, removing knowledge may cause undesirable side effects, such as a deterioration in model performance. In this paper, we investigate the problem of adversarial unlearning, where a malicious party deliberately sends unlearn requests crafted to degrade the model's performance as much as possible. We show that this phenomenon, and the adversary's capabilities, depend on many factors, primarily on the backbone model itself and on the strategy and limitations in selecting the data to be unlearned. The main result of this work is a new method of protecting model performance from these side effects, both when the degradation arises from spontaneous processes and when it results from adversarial actions.
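The adversary in the abstract exploits the freedom to choose which data to unlearn. A minimal sketch of that strategy, assuming the attacker has (or estimates) per-instance influence scores on held-out accuracy, is a greedy selection of the most damaging forget set. The function name and the idea of precomputed `influence` scores are illustrative assumptions, not the paper's threat model verbatim.

```python
# Hedged sketch of the adversary's data-selection strategy: request
# unlearning of the `budget` training points whose removal is estimated
# to reduce held-out accuracy the most.
def adversarial_forget_set(influence, budget):
    """influence: dict mapping instance id -> estimated positive
    contribution to test accuracy. Returns the `budget` ids whose
    removal is expected to be most damaging."""
    ranked = sorted(influence, key=influence.get, reverse=True)
    return ranked[:budget]

picked = adversarial_forget_set({"x1": 0.5, "x2": 0.1, "x3": 0.9}, budget=2)
# picked == ["x3", "x1"]
```

This also explains why the paper's defense constrains data selection: limiting which instances an external party may nominate for unlearning directly shrinks this attack surface.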
Problem

Research questions and friction points this paper is trying to address.

Protect AI models from adversarial unlearning attacks
Mitigate performance deterioration from unlearning requests
Address legal and ethical unlearning requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

First systematic analysis of adversarial unlearning mechanisms and their key influencing factors
Architecture-agnostic defense combining data selection constraints, real-time performance monitoring, and adaptive parameter correction
Handles both spontaneous performance degradation and deliberately adversarial unlearning requests
Authors
Patryk Jasiorski, IDENTT Corporation
Marek Klonowski, Wrocław University of Science and Technology (Privacy, Security, AI, Distributed computing)
Michał Woźniak, Wrocław University of Science and Technology