🤖 AI Summary
This work addresses privacy-compliant continual learning, tackling the core challenge of *how to accurately forget knowledge of specific tasks—thereby satisfying the “right to be forgotten”—while continually acquiring new tasks, avoiding catastrophic forgetting, enabling forward knowledge transfer, and minimizing memory overhead*. We propose the first unified framework integrating continual learning with machine unlearning, featuring a task-level exact unlearning mechanism: it employs task-specific sparse subnetworks for parameter isolation and sharing, augmented by lightweight episodic memory replay to jointly ensure privacy preservation, learning stability, and computational efficiency. Evaluated on multiple image classification benchmarks, our method significantly outperforms existing approaches, achieving state-of-the-art privacy-aware continual learning performance. It is the first to enable verifiable, controllable, and low-overhead task-level knowledge addition and deletion within a single neural network.
📝 Abstract
Lifelong learning algorithms enable models to incrementally acquire new knowledge without forgetting previously learned information. In contrast, the field of machine unlearning focuses on explicitly forgetting certain previous knowledge from pretrained models upon request, in order to comply with data privacy regulations on the right to be forgotten. Enabling efficient lifelong learning with the capability to selectively unlearn sensitive information from models presents a critical and largely unaddressed challenge with conflicting objectives. We address this problem from the perspective of simultaneously preventing catastrophic forgetting and allowing forward knowledge transfer during task-incremental learning, while ensuring exact task unlearning and minimizing memory requirements, all within a single neural network model to be adapted. Our proposed solution, privacy-aware lifelong learning (PALL), involves optimization of task-specific sparse subnetworks with parameter sharing within a single architecture. We additionally utilize an episodic memory rehearsal mechanism to facilitate exact unlearning without performance degradation. We empirically demonstrate the scalability of PALL across various architectures in image classification, and provide a state-of-the-art solution that uniquely integrates lifelong learning and privacy-aware unlearning mechanisms for responsible AI applications.
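To make the parameter-isolation idea concrete, the sketch below shows a toy version of the core mechanism: each task claims a sparse subset of a shared weight vector, may additionally read (share) weights claimed by earlier tasks for forward transfer, and exact unlearning of a task re-initializes only the weights that task exclusively owns. This is an illustrative assumption-laden sketch, not PALL's actual implementation; all names, the random-claiming scheme, and the sparsity level are hypothetical.

```python
import numpy as np


class MaskedNetwork:
    """Toy parameter-isolation scheme over a flat weight vector.

    Each task claims an exclusive sparse set of weights and can also read
    weights owned by earlier tasks (parameter sharing / forward transfer).
    Exact unlearning re-initializes only the weights the task owns, so no
    trace of the forgotten task remains in the shared model.
    Illustrative only -- not the paper's training or masking algorithm.
    """

    def __init__(self, n_params, seed=0):
        self.rng = np.random.default_rng(seed)
        self.weights = self.rng.standard_normal(n_params)
        self.masks = {}                       # task_id -> boolean subnetwork mask
        self.owner = np.full(n_params, -1)    # which task exclusively owns each weight

    def learn_task(self, task_id, sparsity=0.1):
        # Claim a sparse set of still-free weights exclusively for this task.
        free_idx = np.flatnonzero(self.owner == -1)
        n_claim = min(int(sparsity * len(self.weights)), len(free_idx))
        claim_idx = self.rng.choice(free_idx, size=n_claim, replace=False)
        self.owner[claim_idx] = task_id

        # The task's subnetwork = its own weights plus earlier tasks' weights
        # (read-only sharing enables forward knowledge transfer).
        mask = np.zeros(len(self.weights), dtype=bool)
        mask[claim_idx] = True
        mask |= (self.owner >= 0) & (self.owner < task_id)
        self.masks[task_id] = mask

    def unlearn_task(self, task_id):
        # Exact unlearning: wipe only this task's exclusively owned weights;
        # weights shared from earlier tasks are untouched.
        owned = self.owner == task_id
        self.weights[owned] = self.rng.standard_normal(int(owned.sum()))
        self.owner[owned] = -1
        del self.masks[task_id]
```

In a full system the freed weights can be reclaimed by future tasks, and (as in the abstract) an episodic memory rehearsal step would follow unlearning to recover any performance lost on the remaining tasks.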