A Unified Framework for Continual Learning and Unlearning

📅 2024-08-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the fundamental tension between continual learning and machine unlearning: models tend to catastrophically forget old knowledge when acquiring new knowledge, yet struggle to precisely erase specific data instances on demand. The authors propose the first unified framework to jointly model both tasks. Methodologically, they introduce a controlled knowledge distillation mechanism that dynamically balances knowledge acquisition against selective forgetting, coupled with a fixed-size memory buffer to preserve long-term knowledge stability. A dual-objective optimization strategy is further designed to jointly enhance performance on both fronts. Extensive evaluation on standard benchmarks demonstrates state-of-the-art results: average continual learning accuracy improves by 3.2%, while the unlearning success rate exceeds 98.5% with no post-unlearning performance degradation, marking the first simultaneous state-of-the-art result on both tasks.
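The summary mentions a fixed-size memory buffer for long-term knowledge stability but does not specify how it is maintained. As an illustration only, a common way to keep such a buffer is reservoir sampling, which retains every example seen so far with equal probability. The class below is a hypothetical sketch, not the paper's implementation:

```python
import random


class ReservoirBuffer:
    """Fixed-capacity memory buffer filled by reservoir sampling.

    Illustrative sketch: after n examples have been streamed in, each one
    remains in the buffer with probability capacity / n.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = []   # stored examples
        self.seen = 0    # total examples streamed so far

    def add(self, example) -> None:
        self.seen += 1
        if len(self.data) < self.capacity:
            # Buffer not yet full: always keep the example.
            self.data.append(example)
        else:
            # Replace a random slot with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k: int):
        """Draw up to k stored examples for replay/distillation."""
        return random.sample(self.data, min(k, len(self.data)))
```

A buffer like this would feed the distillation term that preserves prior knowledge while new tasks are learned.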

📝 Abstract
Continual learning and machine unlearning are crucial challenges in machine learning, typically addressed separately. Continual learning focuses on adapting to new knowledge while preserving past information, whereas unlearning involves selectively forgetting specific subsets of data. In this paper, we introduce a new framework that jointly tackles both tasks by leveraging controlled knowledge distillation. Our approach enables efficient learning with minimal forgetting and effective targeted unlearning. By incorporating a fixed memory buffer, the system supports learning new concepts while retaining prior knowledge. The distillation process is carefully managed to ensure a balance between acquiring new information and forgetting specific data as needed. Experimental results on benchmark datasets show that our method matches or exceeds the performance of existing approaches in both continual learning and machine unlearning. This unified framework is the first to address both challenges simultaneously, paving the way for adaptable models capable of dynamic learning and forgetting while maintaining strong overall performance. Source code: https://respailab.github.io/CLMUL
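The abstract describes balancing knowledge acquisition against targeted forgetting via controlled distillation, but does not state the loss. As a rough illustration, a dual objective of this kind might combine a cross-entropy task loss on new data, a distillation term toward the previous model on buffered (retain) data, and a negative distillation term on the forget set. Every function name, weighting, and temperature below is an assumption for the sketch, not the paper's formulation:

```python
import numpy as np


def softmax(z, temp=1.0):
    """Numerically stable softmax with temperature."""
    z = z / temp
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def kl(p, q, eps=1e-12):
    """Mean KL divergence KL(p || q) over a batch of distributions."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1).mean()


def dual_objective_loss(student_new, labels_new,
                        student_buf, teacher_buf,
                        student_fgt, teacher_fgt,
                        lam_retain=1.0, lam_forget=0.5, temp=2.0):
    """Hypothetical dual objective: learn new data, stay close to the
    teacher on replayed (retain) data, move away from it on forget data.
    All inputs are raw logits; lam_* weights are illustrative."""
    # Cross-entropy on the new task's logits.
    p_new = softmax(student_new)
    ce = -np.log(p_new[np.arange(len(labels_new)), labels_new] + 1e-12).mean()
    # Distillation toward the teacher on the memory buffer (retention).
    retain = kl(softmax(teacher_buf, temp), softmax(student_buf, temp))
    # Negative distillation on the forget set (reward divergence).
    forget = kl(softmax(teacher_fgt, temp), softmax(student_fgt, temp))
    return ce + lam_retain * retain - lam_forget * forget
```

Under this sketch, minimizing the loss pulls the student toward the teacher on retained data while pushing it away on data marked for unlearning; since both KL terms are non-negative, raising `lam_forget` can only lower the loss for a fixed set of logits.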
Problem

Research questions and friction points this paper is trying to address.

Continual Learning
Machine Unlearning
Selective Forgetting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continual Learning
Machine Unlearning
Balanced Knowledge Transfer
Romit Chatterjee
RespAI Lab, KIIT Bhubaneswar
Vikram S Chundawat
Sagepilot AI
Ayush K Tarun
EPFL
Ankur Mali
Assistant Professor, University of South Florida
Formal language, Memory Networks, Predictive Coding, Natural Language Processing, lifelong machine
Murari Mandal
RespAI Lab, KIIT Bhubaneswar