Module-Aware Parameter-Efficient Machine Unlearning on Transformers

📅 2025-08-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing parameter-efficient unlearning methods neglect the structural characteristics of Transformer modules, hindering precise identification of critical parameters and thus limiting unlearning performance. To address this, we propose MAPE-Unlearn, a module-aware parameter-efficient unlearning framework. It introduces, for the first time, a module-aware mechanism that employs learnable dual masks—separately applied to attention heads and feed-forward filters—optimized via warm-started greedy search to accurately locate and freeze task-critical parameters. The objective function is explicitly driven by the unlearning target. Extensive experiments across multiple architectures (BERT, RoBERTa, ViT) and benchmarks (SST-2, AG News, ImageNet) demonstrate that MAPE-Unlearn significantly outperforms state-of-the-art parameter-efficient unlearning methods, achieving superior unlearning accuracy, enhanced model robustness, and strong generalization capability.
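The dual-mask idea in the summary — one binary mask over attention heads, another over feed-forward filters — can be illustrated with a toy single layer. Everything below (shapes, variable names, the `layer` function) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical shapes for one Transformer layer; the mask placement
# (per attention head, per FFN filter) follows the paper's description,
# but every name and dimension here is an illustrative assumption.
n_heads, head_dim, d_model, d_ff = 4, 8, 32, 64

head_out = rng.normal(size=(n_heads, head_dim))       # per-head attention output
W_o = rng.normal(size=(n_heads * head_dim, d_model))  # output projection
W1 = rng.normal(size=(d_model, d_ff))                 # FFN up-projection (filters)
W2 = rng.normal(size=(d_ff, d_model))                 # FFN down-projection

head_mask = np.ones(n_heads);  head_mask[2] = 0.0     # e.g. freeze head 2 out
filter_mask = np.ones(d_ff);   filter_mask[:8] = 0.0  # e.g. drop first 8 filters

def layer(head_out, head_mask, filter_mask):
    # Mask whole heads before concatenation and the output projection.
    attn = (head_out * head_mask[:, None]).reshape(-1) @ W_o
    # Mask whole FFN filters (hidden units) after the nonlinearity.
    hidden = np.maximum(attn @ W1, 0.0) * filter_mask
    return hidden @ W2

y = layer(head_out, head_mask, filter_mask)
```

Because the masks act on whole structural units, a zeroed head contributes nothing to the layer output regardless of its internal weights — which is what makes head/filter granularity a natural place to localize influence-critical parameters.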

📝 Abstract
The Transformer has become fundamental to a wide range of pre-trained large models that have achieved remarkable success across diverse applications. Machine unlearning, which focuses on efficiently removing specific data influences to comply with privacy regulations, shows promise in restricting updates to influence-critical parameters. However, existing parameter-efficient unlearning methods are largely devised in a module-oblivious manner, which tends to inaccurately identify these parameters and leads to inferior unlearning performance for Transformers. In this paper, we propose MAPE-Unlearn, a module-aware parameter-efficient machine unlearning approach that uses a learnable pair of masks to pinpoint influence-critical parameters in the heads and filters of Transformers. The learning objective of these masks is derived from the desiderata of unlearning and optimized through an efficient algorithm featuring a greedy search with a warm start. Extensive experiments on various Transformer models and datasets demonstrate the effectiveness and robustness of MAPE-Unlearn for unlearning.
Problem

Research questions and friction points this paper is trying to address.

Identifying influence-critical parameters in Transformers for unlearning
Improving unlearning performance with module-aware parameter efficiency
Removing specific data influences to comply with privacy regulations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Module-aware parameter-efficient unlearning for Transformers
Learnable masks pinpoint critical heads and filters
Greedy search algorithm optimizes unlearning objectives
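The greedy search with warm start mentioned above can be sketched as iterative bit-flipping on the mask under a budget. The importance scores, objective, and function names below are fabricated for illustration; the paper derives its actual objective from the desiderata of unlearning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "importance scores": influence of each attention head / FFN filter
# on the forget set vs. the retain set (fabricated numbers for illustration).
n_heads, n_filters = 4, 8
forget_score = rng.random(n_heads + n_filters)  # high = critical for forgotten data
retain_score = rng.random(n_heads + n_filters)  # high = critical for retained data

def objective(mask):
    """Surrogate unlearning objective: selected units (mask bit = 1) should
    cover forget-critical modules while sparing retain-critical ones."""
    return float(mask @ forget_score - mask @ retain_score)

def greedy_search(budget, warm_start=None):
    """Select one unit at a time, always taking the best marginal gain;
    a warm start seeds the search with a pre-ranked unit."""
    mask = np.zeros(n_heads + n_filters) if warm_start is None else warm_start.copy()
    while mask.sum() < budget:
        gains = np.where(mask == 0, forget_score - retain_score, -np.inf)
        best = int(np.argmax(gains))
        if gains[best] <= 0:   # no remaining unit improves the objective
            break
        mask[best] = 1.0
    return mask

# Warm start: preselect the single unit most important to the forget set.
warm = np.zeros(n_heads + n_filters)
warm[int(np.argmax(forget_score))] = 1.0

mask = greedy_search(budget=3, warm_start=warm)
head_mask, filter_mask = mask[:n_heads], mask[n_heads:]
```

The warm start matters because a purely greedy search from an empty mask re-evaluates all units from scratch; seeding it with units already ranked by forget-set importance cuts the number of greedy steps.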
Wenjie Bao
The State Key Laboratory of Blockchain and Data Security, Zhejiang University

Jian Lou
The State Key Laboratory of Blockchain and Data Security, Zhejiang University

Yuke Hu
Zhejiang University
Data Privacy · Trustworthy LLM · Differential Privacy · Machine Unlearning

Xiaochen Li
UNC Greensboro

Zhihao Liu
The State Key Laboratory of Blockchain and Data Security, Zhejiang University

Jiaqi Liu
The State Key Laboratory of Blockchain and Data Security, Zhejiang University

Zhan Qin
Researcher, Zhejiang University
Data Security and Privacy · AI Security

Kui Ren
Professor and Dean of Computer Science, Zhejiang University, ACM/IEEE Fellow
Data Security & Privacy · AI Security · IoT & Vehicular Security