🤖 AI Summary
Machine unlearning in over-parameterized models such as neural networks remains fundamentally limited: existing approaches achieve only approximate forgetting in the output space and cannot guarantee exact equivalence in the parameter space.
Method: This paper provides the first theoretical proof that exact parameter-level unlearning is achievable in over-parameterized linear models via data relabeling. Building on this insight, we propose an alternating optimization framework that jointly optimizes the relabeling and unlearning objectives, and extend it to nonlinear networks using random-feature analysis, SGD dynamics modeling, and over-parameterization theory.
Contribution/Results: Our method significantly outperforms state-of-the-art unlearning approaches, especially relabeling-based ones, across diverse benchmarks. Crucially, it provides the first empirical validation of exact parameter-space unlearning in practical neural networks, demonstrating both feasibility and effectiveness.
📝 Abstract
Machine unlearning (MU) aims to make a well-trained model behave as if it had never been trained on specific data. In today's over-parameterized models, dominated by neural networks, a common approach is to manually relabel data and fine-tune the well-trained model. This can approximate the MU model in the output space, but whether it can achieve exact MU, i.e., equivalence in the parameter space, remains an open question. We answer this question by employing random feature techniques to construct an analytical framework. Under the premise of model optimization via stochastic gradient descent, we theoretically demonstrate that over-parameterized linear models can achieve exact MU by relabeling specific data. We also extend this work to real-world nonlinear networks and propose an alternating optimization algorithm that unifies the tasks of unlearning and relabeling. Numerical experiments confirm the algorithm's effectiveness, showing superior unlearning performance across various scenarios compared to current state-of-the-art methods, and particularly over similar relabeling-based MU approaches.