Flew Over Learning Trap: Learn Unlearnable Samples by Progressive Staged Training

๐Ÿ“… 2023-06-03
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 1
โœจ Influential: 1
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses the false sense of data privacy induced by unlearnable examplesโ€”samples perturbed to cause premature overfitting to spurious features while suppressing semantic learning. We propose a progressive, multi-stage training paradigm grounded in our novel observation that models initially learn both perturbation and semantic features, yet shallow layers rapidly overfit to perturbations. To break this unlearnability bottleneck, we design a dynamic hierarchical freezing/unfreezing mechanism. Our method integrates progressive layered network training, adaptive parameter scheduling, and multi-stage loss formulation, and is compatible with mainstream architectures including CNNs, ResNets, and Vision Transformers (ViTs). Extensive experiments on CIFAR-10/100 and ImageNet-mini demonstrate substantial improvements over existing defenses, establishing our approach as a new benchmark for evaluating unlearnability mitigation techniques.
๐Ÿ“ Abstract
Unlearning techniques have been proposed to prevent third parties from exploiting unauthorized data: they generate unlearnable samples by adding imperceptible perturbations to data before public release. These unlearnable samples misguide model training into learning perturbation features while ignoring image semantic features. Through an in-depth analysis, we observe that models can learn both image features and perturbation features of unlearnable samples at an early stage, but rapidly enter an overfitting stage because the shallow layers tend to overfit on perturbation features. Based on these observations, we propose Progressive Staged Training to effectively prevent models from overfitting to perturbation features. We evaluate our method on multiple model architectures over diverse datasets, e.g., CIFAR-10, CIFAR-100, and ImageNet-mini. Our method circumvents the unlearnability of all state-of-the-art methods in the literature and provides a reliable baseline for further evaluation of unlearnable techniques.
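The public summary does not spell out the exact freezing schedule, but the core idea — keep shallow layers from training in early stages so they cannot immediately overfit to perturbation features, then progressively unfreeze them — can be sketched as a simple stage-to-trainable-layers mapping. All names below are illustrative assumptions, not the paper's actual algorithm:

```python
def staged_trainable_layers(layers, stage, num_stages):
    """Illustrative schedule (one plausible reading of the paper, not its
    exact method): at stage 0 only the deepest layer trains, and each
    later stage unfreezes progressively shallower layers, so shallow
    layers cannot rapidly overfit to perturbation features."""
    if num_stages < 1 or not (0 <= stage < num_stages):
        raise ValueError("stage must satisfy 0 <= stage < num_stages")
    n = len(layers)
    # Number of shallow layers still frozen: n - 1 at stage 0, 0 at the end.
    frozen = round((n - 1) * (1 - stage / (num_stages - 1))) if num_stages > 1 else 0
    return layers[frozen:]

# Example: a 4-layer model trained in 4 stages (hypothetical layer names).
model = ["conv1", "conv2", "conv3", "fc"]
print(staged_trainable_layers(model, 0, 4))  # ['fc']
print(staged_trainable_layers(model, 3, 4))  # ['conv1', 'conv2', 'conv3', 'fc']
```

In a real framework, the returned list would determine which parameters receive gradient updates at each stage (e.g., by toggling per-layer gradient flags); the paper additionally pairs this with adaptive parameter scheduling and a multi-stage loss, which this sketch omits.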
Problem

Research questions and friction points this paper is trying to address.

Unlearnable examples mislead models to learn perturbations
Shallow layers trap models in harmful perturbation learning
Progressive Staged Training breaks unlearnable examples effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates unlearnable examples via imperceptible perturbations
Proposes Progressive Staged Training framework
Prevents models from learning perturbation features
๐Ÿ”Ž Similar Papers
No similar papers found.
Pucheng Dang
University of Chinese Academy of Sciences
Privacy Protection, DNN security
Xing Hu
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Kaidi Xu
Associate Professor, City University of Hong Kong
AI Security, Uncertainty Quantification, Formal Verification
Jinhao Duan
Postdoc@UNC-Chapel Hill, Ph.D.@Drexel University
AI4Science, Trustworthy ML, Generative AI
Di Huang
University of Chinese Academy of Sciences, China; State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Husheng Han
Institute of Computing Technology, Chinese Academy of Sciences
Computer architecture, Security, DNN, Domain-Specific Accelerator
Rui Zhang
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Zidong Du
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Qi Guo
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Yunji Chen
Institute of Computing Technology, Chinese Academy of Sciences
processor architecture, microarchitecture, machine learning