Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning

📅 2024-01-18
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
Addressing the challenge of implementing the "right to be forgotten" in machine learning—particularly for non-convex models, sequential or batch unlearning requests, and rigorous privacy guarantees—this paper proposes the Langevin Unlearning framework. The method unifies differentially private training with certified unlearning by grounding noisy gradient descent in Langevin dynamics, yielding approximate certified unlearning with provable privacy guarantees. It supports non-convex optimization while balancing unlearning efficacy and model utility. Computationally, it achieves up to a tens-of-fold speedup over full retraining for batch unlearning. Empirical evaluation on standard benchmarks demonstrates a three-way trade-off among effective unlearning (verified via membership inference and influence analysis), formal ε-differential privacy, and competitive generalization performance.

📝 Abstract
Machine unlearning has raised significant interest with the adoption of laws ensuring the "right to be forgotten". Researchers have provided a probabilistic notion of approximate unlearning under a definition similar to Differential Privacy (DP), where privacy is defined as statistical indistinguishability from retraining from scratch. We propose Langevin unlearning, an unlearning framework based on noisy gradient descent with privacy guarantees for approximate unlearning problems. Langevin unlearning unifies the DP learning process and the privacy-certified unlearning process, with many algorithmic benefits: approximate certified unlearning for non-convex problems, complexity savings compared to retraining, and sequential and batch unlearning for multiple unlearning requests.
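The core primitive the abstract describes—noisy gradient descent, i.e., a discretized Langevin dynamics—can be sketched in a few lines. Below is a minimal, illustrative NumPy version on a ridge-regression loss; the step size, noise scale, and "fine-tune on the retained data to unlearn" recipe are simplifications for intuition and do not reproduce the paper's calibrated guarantees.

```python
import numpy as np

def noisy_gd(w, X, y, eta=0.1, sigma=0.05, steps=200, lam=1e-2, rng=None):
    """Noisy gradient descent (unadjusted Langevin step) on a
    regularized least-squares loss. Hyperparameters are illustrative."""
    rng = rng if rng is not None else np.random.default_rng(0)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y) + lam * w
        # Langevin update: gradient step plus Gaussian noise sqrt(2*eta)*sigma
        w = w - eta * grad + sigma * np.sqrt(2 * eta) * rng.normal(size=w.shape)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)

w_learned = noisy_gd(np.zeros(3), X, y, rng=rng)       # private "learning" run
X_ret, y_ret = X[1:], y[1:]                            # one point requests deletion
w_unlearned = noisy_gd(w_learned, X_ret, y_ret, rng=rng)  # continue on retained data
```

The key idea the framework formalizes is that continuing the same noisy dynamics on the retained dataset (rather than retraining from scratch) drives the parameter distribution toward the retrain-from-scratch distribution, which is what makes the complexity savings possible.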
Problem

Research questions and friction points this paper is trying to address.

Machine Learning
Right to be Forgotten
Privacy and Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Langevin Unlearning
Differential Privacy
Noisy Gradient Descent
Eli Chien
Visiting Researcher, Google
Regulatable AI, Machine Unlearning, Differential Privacy, Graph machine learning
Haoyu Wang
Department of Electrical and Computer Engineering, Georgia Institute of Technology, Georgia, U.S.A.
Ziang Chen
Department of Mathematics, Massachusetts Institute of Technology, Massachusetts, U.S.A.
Pan Li
Department of Electrical and Computer Engineering, Georgia Institute of Technology, Georgia, U.S.A.