NOVO: Unlearning-Compliant Vision Transformers

📅 2025-07-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses selective machine unlearning for Vision Transformers (ViTs), proposing a fine-tuning-free, built-in unlearning architecture. To tackle the problem, the method explicitly models the unlearning process during training: it introduces learnable keys and a key revocation mechanism to render model outputs for targeted classes irreversibly invalid; it further employs a batch partitioning strategy—using proxy unlearn and retain sets—to optimize the model such that logits for unlearned classes become unpredictable. This is the first approach to embed unlearning capability directly into the ViT architecture, enabling instantaneous, secure, and irreversible information erasure. Evaluated across multiple datasets and ViT variants, the method significantly outperforms existing fine-tuning–based and fine-tuning–free unlearning baselines: membership inference attack success rates drop by over 40%, while retain-set accuracy remains nearly intact (fluctuation < 0.3%).

📝 Abstract
Machine unlearning (MUL) refers to the problem of making a pre-trained model selectively forget some training instances or class(es) while retaining performance on the remaining dataset. Existing MUL research involves fine-tuning using a forget and/or retain set, making it expensive and/or impractical, and often causing performance degradation in the unlearned model. We introduce NOVO, an unlearning-aware vision transformer-based architecture that can directly perform unlearning for future unlearning requests without any fine-tuning over the requested set. The proposed model is trained by simulating unlearning during the training process itself. It involves randomly separating the class(es)/sub-class(es) present in each mini-batch into two disjoint sets: a proxy forget-set and a retain-set, and the model is optimized so that it is unable to predict the forget-set. Forgetting is achieved by withdrawing keys, making unlearning on-the-fly and avoiding performance degradation. The model is trained jointly with learnable keys and original weights, ensuring that withholding a key irreversibly erases information, as validated by membership inference attack scores. Extensive experiments on various datasets, architectures, and resolutions confirm NOVO's superiority over both fine-tuning-free and fine-tuning-based methods.
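The proxy-split training described in the abstract can be sketched roughly as follows. This is a hypothetical NumPy illustration, not the paper's actual objective: the split ratio, the choice of entropy-maximization (cross-entropy toward a uniform distribution) as the "unpredictable logits" term, and all function names are assumptions made for clarity.

```python
import numpy as np

def proxy_split(labels, rng):
    """Randomly split the classes present in a mini-batch into a
    disjoint proxy forget-set and retain-set, simulating a future
    unlearning request during training (split ratio is an assumption)."""
    classes = np.unique(labels)
    rng.shuffle(classes)
    k = max(1, len(classes) // 2)
    return set(classes[:k]), set(classes[k:])  # (proxy forget, retain)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def simulated_unlearning_loss(logits, labels, forget, retain):
    """Hypothetical objective: standard cross-entropy on retain-set
    samples, plus a term pushing forget-set logits toward a uniform
    (i.e. unpredictable) distribution. The paper's exact loss may differ."""
    p = softmax(logits)
    n_cls = logits.shape[1]
    loss = 0.0
    for pi, y in zip(p, labels):
        if y in retain:
            loss += -np.log(pi[y] + 1e-12)           # learn retained classes
        else:
            u = np.full(n_cls, 1.0 / n_cls)
            loss += np.sum(u * -np.log(pi + 1e-12))  # make forget-set unpredictable
    return loss / len(labels)
```

In each training step, a fresh random split would be drawn, so every class is repeatedly rehearsed in both the forget and retain roles.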
Problem

Research questions and friction points this paper is trying to address.

Enables selective forgetting in vision transformers without fine-tuning
Prevents performance degradation during machine unlearning process
Achieves on-the-fly unlearning by withdrawing learnable keys
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unlearning-aware vision transformer architecture
Simulated unlearning during training process
Forgetting by withdrawing keys on-the-fly
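The key-withdrawal idea above can be illustrated with a toy gated classifier head. This is a minimal sketch under assumed mechanics (per-class keys multiplicatively modulating classifier weights); the paper's actual key/gating design, class names, and API are not specified in this summary and are invented here for illustration.

```python
import numpy as np

class KeyGatedHead:
    """Toy sketch: each class logit is computable only while that
    class's learnable key is present. Withdrawing (revoking) the key
    leaves the class output uninformative, with no fine-tuning needed."""

    def __init__(self, n_classes, dim, rng):
        self.W = rng.normal(size=(n_classes, dim))              # classifier weights
        self.keys = {c: rng.normal(size=dim) for c in range(n_classes)}

    def revoke(self, c):
        """Withdraw the key for class c. In the real model this is
        irreversible because the key is discarded, not merely masked."""
        self.keys.pop(c, None)

    def logits(self, feats):
        out = np.full((feats.shape[0], self.W.shape[0]), -np.inf)
        for c, key in self.keys.items():
            # a class logit exists only via its key-modulated weights
            out[:, c] = feats @ (self.W[c] * key)
        return out
```

After `head.revoke(c)`, the model can never again predict class `c`, which is the "instantaneous, irreversible erasure" property the summary attributes to key revocation.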
👥 Authors
Soumya Roy (Product Management, Salesforce)
Soumya Banerjee (IIT Kanpur, Kanpur, India)
Vinay Verma (Amazon India, Bangalore, India)
Soumik Dasgupta (Walmart Labs, Bangalore, India)
Deepak Gupta (Amazon India, Bangalore, India)
Piyush Rai (IIT Kanpur)