ACU: Analytic Continual Unlearning for Efficient and Exact Forgetting with Privacy Preservation

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
In continual learning (CL), privacy-sensitive continual unlearning (CU) requires precise, sequential removal of specific knowledge without access to historical data, yet existing methods violate CL constraints by relying on retraining or gradient-based optimization, compromising both efficiency and model fidelity. Method: we propose the first gradient-free, analytical continual unlearning framework, which leverages closed-form least-squares solutions and recursive parameter updates to achieve single-step, interpretable, data-free exact unlearning. Contribution/Results: we theoretically prove its exact unlearning capability, model-fidelity preservation, and invertibility for linear models. Experiments show a 3.2× speedup in inference over SOTA approaches, a 91% reduction in unlearning error, and strong robustness under high-frequency unlearning requests, all while strictly adhering to CL constraints.

📝 Abstract
The development of artificial intelligence demands that models incrementally update knowledge by Continual Learning (CL) to adapt to open-world environments. To meet privacy and security requirements, Continual Unlearning (CU) emerges as an important problem, aiming to sequentially forget particular knowledge acquired during the CL phase. However, existing unlearning methods primarily focus on single-shot joint forgetting and face significant limitations when applied to CU. First, most existing methods require access to the retained dataset for re-training or fine-tuning, violating the inherent constraint in CL that historical data cannot be revisited. Second, these methods often suffer from a poor trade-off between system efficiency and model fidelity, making them vulnerable to being overwhelmed or degraded by adversaries through deliberately frequent requests. In this paper, we identify that the limitations of existing unlearning methods stem fundamentally from their reliance on gradient-based updates. To bridge the research gap at its root, we propose a novel gradient-free method for CU, named Analytic Continual Unlearning (ACU), for efficient and exact forgetting with historical data privacy preservation. In response to each unlearning request, our ACU recursively derives an analytical (i.e., closed-form) solution in an interpretable manner using the least squares method. Theoretical and experimental evaluations validate the superiority of our ACU on unlearning effectiveness, model fidelity, and system efficiency.
Problem

Research questions and friction points this paper is trying to address.

Exact forgetting in Continual Unlearning with privacy preservation
Efficient gradient-free method for sequential knowledge removal
Overcoming limitations of gradient-based unlearning in CL scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-free method for continual unlearning
Analytical solution using least squares
Preserves historical data privacy
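
To make the closed-form idea concrete, here is a minimal sketch of gradient-free exact unlearning for a linear (ridge) model. It is an illustration of the general decremental least-squares technique via the Woodbury identity, not the paper's actual ACU algorithm; all names (`fit_ridge`, `unlearn`, the cached inverse Gram matrix `R`) are illustrative assumptions.

```python
import numpy as np

def fit_ridge(X, Y, lam=1e-2):
    """Closed-form ridge regression: W = (X^T X + lam*I)^{-1} X^T Y.
    Also returns the inverse regularized Gram matrix R, cached so that
    later unlearning needs no access to the training data."""
    d = X.shape[1]
    R = np.linalg.inv(X.T @ X + lam * np.eye(d))
    W = R @ (X.T @ Y)
    return W, R

def unlearn(W, R, X_u, Y_u):
    """Exactly remove the contribution of the batch (X_u, Y_u) in one
    closed-form step, using only the cached (W, R) -- the retained data
    is never revisited. Based on the Woodbury identity:
      (A - X_u^T X_u)^{-1} = R + R X_u^T (I - X_u R X_u^T)^{-1} X_u R,
    where A = X^T X + lam*I and R = A^{-1}."""
    m = X_u.shape[0]
    K = R @ X_u.T @ np.linalg.inv(np.eye(m) - X_u @ R @ X_u.T)
    R_new = R + K @ X_u @ R
    # W_new = R_new (X^T Y - X_u^T Y_u); using R_new X^T Y = W + K X_u W
    # avoids ever reconstructing X^T Y from the historical data.
    W_new = W + K @ (X_u @ W) - R_new @ (X_u.T @ Y_u)
    return W_new, R_new

# Demo: unlearning the first 10 samples matches retraining from scratch
# on the retained 40 samples, up to floating-point error.
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((50, 5)), rng.standard_normal((50, 2))
W_full, R = fit_ridge(X, Y)
W_un, R_un = unlearn(W_full, R, X[:10], Y[:10])   # forget first 10 samples
W_ref, R_ref = fit_ridge(X[10:], Y[10:])          # oracle retrain on the rest
```

Because each request is handled by one recursive closed-form update rather than iterative gradient steps, this style of unlearning is exact (the result equals retraining on the retained set) and remains cheap under frequent unlearning requests.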