🤖 AI Summary
This work addresses the high computational cost and poor scalability of Hessian matrix computation in machine unlearning, particularly for large-scale models. We propose an efficient unlearning method grounded in an incremental learning perspective. Our core innovation is the first formal modeling of machine unlearning as an incremental learning problem, enabling gradient-based optimization and influence approximation to circumvent explicit Hessian computation and inversion, thereby preserving deletion efficacy while drastically improving efficiency. The method establishes a theoretical connection between memorization and unlearning, supporting precise sample-level influence estimation and parameter adjustment. Extensive experiments across multiple datasets and model architectures demonstrate that our approach significantly outperforms state-of-the-art baselines, including influence-function-based methods, in three critical dimensions: deletion guarantees, computational efficiency, and post-unlearning model performance retention.
📝 Abstract
Due to growing privacy concerns, machine unlearning, which aims to enable machine learning models to "forget" specific training data, has received increasing attention. Among existing methods, influence-based unlearning has emerged as a prominent approach due to its ability to estimate the impact of individual training samples on model parameters without retraining. However, this approach suffers from prohibitive computational overhead arising from the need to compute the Hessian matrix and its inverse over all training samples and parameters, rendering it impractical for large-scale models and for scenarios with frequent data deletion requests. This overhead underscores the inherent difficulty of forgetting. Inspired by cognitive science, which suggests that memorizing is easier than forgetting, this paper establishes a theoretical link between memorizing (incremental learning) and forgetting (unlearning). This connection allows machine unlearning to be addressed from the perspective of incremental learning: unlike the time-consuming Hessian computations required for unlearning (forgetting), incremental learning (memorizing) typically relies on more efficient gradient-based optimization, consistent with the cognitive observation above. Building on this connection, we introduce the Influence Approximation Unlearning (IAU) algorithm for efficient machine unlearning from the incremental perspective. Extensive empirical evaluations demonstrate that IAU achieves a superior balance among removal guarantee, unlearning efficiency, and model utility, outperforming state-of-the-art methods across diverse datasets and model architectures. Our code is available at https://github.com/Lolo1222/IAU.
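To make the contrast concrete, here is a minimal sketch (not the paper's IAU algorithm) on a toy ridge-regression problem. It shows the classical influence-function unlearning update the abstract refers to, which requires the loss Hessian `H` and a linear solve against it, next to a crude Hessian-free first-order surrogate that replaces `H⁻¹` with a scalar step size. All variable names and the step-size choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

lam = 1.0  # ridge strength; loss is 0.5*||Xw - y||^2 + 0.5*lam*||w||^2

def fit(X, y):
    """Closed-form ridge minimizer (stands in for 'retraining from scratch')."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_full = fit(X, y)          # model trained on all data
k = 0                       # index of the sample to be forgotten
x_k, y_k = X[k], y[k]
g_k = (x_k @ w_full - y_k) * x_k   # gradient of the deleted sample's loss at w_full

# Influence-function unlearning: a Newton-style step that needs the full
# Hessian of the training loss (here d x d; for deep models this is huge).
H = X.T @ X + lam * np.eye(d)
w_influence = w_full + np.linalg.solve(H, g_k)

# Hessian-free surrogate: reverse the deleted sample's contribution with a
# single scaled gradient step (scalar step size instead of H^{-1}).
eta = d / np.trace(H)       # heuristic step size, an illustrative choice
w_grad = w_full + eta * g_k

# Ground truth for comparison: retrain without sample k.
w_retrain = fit(np.delete(X, k, axis=0), np.delete(y, k))
```

On this quadratic loss the influence update lands much closer to the retrained model than doing nothing, which is the efficiency/accuracy trade-off the abstract describes: the Hessian-based step is accurate but costly, while gradient-only updates avoid the Hessian entirely.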