🤖 AI Summary
Data duplication poses a systemic threat to machine unlearning: maliciously replicated samples significantly degrade the efficacy of mainstream unlearning strategies such as retraining from scratch, causing an average 37.2% drop in model performance.
Method: We propose the first unified unlearning attack framework targeting duplicated and near-duplicate samples across standard supervised, federated, and reinforcement learning paradigms. We introduce three novel near-duplicate generation methods, operating at the semantic, structural, and strategic levels, that bypass conventional deduplication detection. Our evaluation employs adversarial unlearning assessment and deduplication-robustness testing.
Contribution/Results: We empirically demonstrate that duplicated data evade existing detection mechanisms and induce substantial residual effects post-unlearning. Furthermore, we establish a cross-paradigm quantitative model characterizing the impact of duplication on unlearning fidelity. This work delivers critical theoretical insights and technical benchmarks for controllable, trustworthy unlearning in AI systems.
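The summary states that carefully crafted near-duplicates evade deduplication detection. The paper's three generation methods are not detailed here, so the following is only a toy illustration of the underlying weakness it exploits: an imperceptible perturbation preserves a sample's content while defeating exact-match (hash-based) deduplication. The function name `near_duplicate` and the epsilon value are assumptions for this sketch, not the paper's method.

```python
import hashlib

import numpy as np

def near_duplicate(x, eps=0.01, rng=None):
    """Return a copy of x with a tiny uniform perturbation (toy near-duplicate)."""
    rng = rng or np.random.default_rng(0)
    return np.clip(x + rng.uniform(-eps, eps, x.shape), 0.0, 1.0)

# A hypothetical normalized sample (e.g., an 8x8 image patch)
x = np.random.default_rng(1).random((8, 8))
xd = near_duplicate(x)

# Exact-match deduplication compares hashes, so the pair is no longer flagged...
h1 = hashlib.sha256(x.tobytes()).hexdigest()
h2 = hashlib.sha256(xd.tobytes()).hexdigest()

# ...even though the two samples are numerically almost identical
print(h1 != h2)                    # hashes differ
print(np.abs(x - xd).max() < 0.02)  # but content barely changed
```

Perceptual or embedding-based deduplication would require correspondingly subtler perturbations, which is the gap the paper's semantic, structural, and strategic methods target.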
📝 Abstract
Duplication is a prevalent issue in datasets. Existing research has demonstrated that duplicated data in a training set can significantly affect both model performance and data privacy. However, the impact of data duplication on the unlearning process remains largely unexplored. This paper addresses this gap by pioneering a comprehensive investigation into the role of data duplication, not only in standard machine unlearning but also in federated and reinforcement unlearning paradigms. Specifically, we propose an adversary who duplicates a subset of the target model's training set and injects the copies back into that training set. After training, the adversary requests that the model owner unlearn this duplicated subset and analyzes the effect on the unlearned model. For example, the adversary can challenge the model owner by revealing that, despite the unlearning effort, the influence of the duplicated subset persists in the model. Moreover, to evade detection by de-duplication techniques, we propose three novel near-duplication methods for the adversary, each tailored to a specific unlearning paradigm, and examine their impact on the unlearning process when de-duplication techniques are applied. Our findings reveal several crucial insights: 1) the gold-standard unlearning method, retraining from scratch, fails to unlearn effectively under certain conditions; 2) unlearning duplicated data can cause significant model degradation in specific scenarios; and 3) meticulously crafted duplicates can evade detection by de-duplication methods.
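The attack described above can be sketched end to end on toy data. A minimal illustration, assuming a simple logistic-regression model trained with gradient descent (all names, data shapes, and hyperparameters here are stand-ins, not the paper's setup): the adversary injects an exact duplicate of a target subset, the owner later "unlearns" by retraining from scratch without the duplicated copies, yet the originals remain in the training set, so the subset's influence survives.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, epochs=200, lr=0.5):
    """Fit logistic regression by plain batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        g = p - y                               # gradient of log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(wb, X, y):
    w, b = wb
    return ((X @ w + b > 0).astype(int) == y).mean()

# Synthetic two-class data (hypothetical stand-in for a real training set)
X = rng.normal(size=(400, 2)) + np.repeat([[2, 2], [-2, -2]], 200, axis=0)
y = np.repeat([1, 0], 200)

# Adversary: pick a target subset S and inject an exact duplicate copy S'
idx_S = rng.choice(len(X), 40, replace=False)
X_aug = np.vstack([X, X[idx_S]])
y_aug = np.concatenate([y, y[idx_S]])

# Model owner trains on the augmented set
model = train_logreg(X_aug, y_aug)

# "Gold standard" unlearning: retrain from scratch with the requested
# copies S' removed -- but the original samples S are still in X
model_unlearned = train_logreg(X, y)

# Residual influence: the retrained model still fits S well, because
# only the duplicate copies were removed, not the originals
print(accuracy(model, X[idx_S], y[idx_S]))
print(accuracy(model_unlearned, X[idx_S], y[idx_S]))
```

Both accuracies stay high on this well-separated data, which mirrors the paper's first finding: retraining from scratch can fail to remove a subset's influence when duplicates of it remain in the training set.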