Orthogonal Soft Pruning for Efficient Class Unlearning

📅 2025-06-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for machine unlearning—specifically, selectively erasing knowledge of target classes from pretrained models—struggle to simultaneously achieve fast deletion, high accuracy retention for remaining classes, and strong privacy guarantees. Method: We propose a class-aware soft-pruning framework based on orthogonal convolutional kernel regularization. It identifies class-specific channels via activation difference analysis, then enforces filter decorrelation and channel-level soft pruning to enable millisecond-scale precise unlearning. Results: Evaluated on CIFAR-10, CIFAR-100, and TinyImageNet, our method achieves complete forgetting of target classes while degrading accuracy on retained classes by less than 0.5%. It accelerates unlearning by 2–3 orders of magnitude over state-of-the-art approaches and significantly mitigates membership inference attacks—thereby satisfying real-time, regulatory-compliant, and robust privacy requirements in ML-as-a-Service (MLaaS) deployments.
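The summary's core ingredient is filter decorrelation via orthogonal convolutional kernel regularization. The paper's code is not reproduced here; as a minimal NumPy sketch under the common formulation of a soft orthogonality penalty on flattened filters (the function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def orthogonal_penalty(conv_weight: np.ndarray) -> float:
    """Soft orthogonality penalty ||W W^T - I||_F^2 over flattened filters.

    conv_weight: array of shape (C_out, C_in, k, k). Each output filter is
    flattened to one row; the penalty is zero iff the rows are orthonormal,
    i.e. the filters are fully decorrelated.
    """
    c_out = conv_weight.shape[0]
    w = conv_weight.reshape(c_out, -1)      # one row per filter
    gram = w @ w.T                          # (C_out, C_out) filter correlations
    return float(np.sum((gram - np.eye(c_out)) ** 2))
```

Adding this penalty to the training loss pushes filters toward mutual orthogonality, which is what lets class-specific channels later be removed without disturbing the rest.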

📝 Abstract
Machine unlearning aims to selectively remove class-specific knowledge from pretrained neural networks to satisfy privacy regulations such as the GDPR. Existing methods typically face a trade-off between unlearning speed and preservation of predictive accuracy, often incurring either high computational overhead or significant performance degradation on retained classes. In this paper, we propose a novel class-aware soft pruning framework leveraging orthogonal convolutional kernel regularization to achieve rapid and precise forgetting with millisecond-level response times. By enforcing orthogonality constraints during training, our method decorrelates convolutional filters and disentangles feature representations, while efficiently identifying class-specific channels through activation difference analysis. Extensive evaluations across multiple architectures and datasets demonstrate stable pruning with near-instant execution, complete forgetting of targeted classes, and minimal accuracy loss on retained data. Experiments on CIFAR-10, CIFAR-100, and TinyImageNet confirm that our approach substantially reduces membership inference attack risks and accelerates unlearning by orders of magnitude compared to state-of-the-art baselines. This framework provides an efficient, practical solution for real-time machine unlearning in Machine Learning as a Service (MLaaS) scenarios.
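The abstract's "activation difference analysis" step identifies class-specific channels by comparing how strongly each channel fires on the target class versus the retained classes. A minimal sketch of that idea, assuming per-sample channel-mean activations have already been collected (the function name, array shapes, and `top_k` parameter are assumptions for illustration):

```python
import numpy as np

def class_specific_channels(acts_target: np.ndarray,
                            acts_retain: np.ndarray,
                            top_k: int = 1) -> np.ndarray:
    """Rank channels by the gap in mean activation between the target class
    and the retained classes; the top-k are candidates for soft pruning.

    acts_target, acts_retain: arrays of shape (num_samples, num_channels)
    holding channel-averaged activations per sample.
    """
    diff = acts_target.mean(axis=0) - acts_retain.mean(axis=0)
    return np.argsort(diff)[::-1][:top_k]   # indices of largest gaps
```

Channels whose activations are dominated by the target class carry the knowledge to be forgotten; the orthogonality constraint makes this attribution cleaner.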
Problem

Research questions and friction points this paper is trying to address.

Selectively remove class-specific knowledge from neural networks
Balance unlearning speed and predictive accuracy preservation
Reduce computational overhead and performance degradation risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Orthogonal convolutional kernel regularization
Class-aware soft pruning framework
Activation difference analysis
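The final contribution, channel-level soft pruning, can be sketched as scaling the identified channels' weights rather than deleting them, which is what makes the operation millisecond-scale and non-destructive. This toy implementation (function name and `alpha` parameter are illustrative assumptions, not the paper's API) shows the idea:

```python
import numpy as np

def soft_prune(conv_weight: np.ndarray, channels, alpha: float = 0.0) -> np.ndarray:
    """Soft-prune selected output channels by scaling them with alpha.

    alpha=0 suppresses a channel entirely (full forgetting); a small
    positive alpha would leave a residual response. The weights are
    copied rather than modified in place, so pruning needs no retraining
    and the original model is untouched.
    """
    w = conv_weight.copy()
    w[list(channels)] *= alpha
    return w
```

Because only a masked copy of the weights changes, unlearning reduces to one indexing operation per layer instead of gradient-based fine-tuning, consistent with the reported 2-3 orders-of-magnitude speedup.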
Qinghui Gong
School of Information Science and Technology, Southwest Jiaotong University
Xue Yang
School of Information Science and Technology, Southwest Jiaotong University
Xiaohu Tang
Southwest Jiaotong University
Coding, Information Security