Continual Learning via Sparse Memory Finetuning

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address catastrophic forgetting in the continual learning of large language models (LLMs), this paper proposes sparse memory finetuning. The method builds on a memory-layer architecture and updates only the memory slots that are highly activated by the new data relative to their usage during pretraining, which substantially reduces the interference between old and new knowledge that arises when trainable parameters are shared across tasks. On two question-answering benchmarks, sparse memory finetuning matches the new-task performance of full finetuning and LoRA while reducing the drop in held-out NaturalQuestions F1 from 89% (full finetuning) and 71% (LoRA) to just 11%. The core contribution is an activation-driven sparse update mechanism for continual learning that balances plasticity for new knowledge with stability of existing capabilities.
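A minimal sketch of the slot-selection and masked-update idea described above, assuming a PyTorch-style memory value table. The ratio-based scoring, the function names (`select_slots_to_update`, `masked_memory_update`), and the hyperparameters are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def select_slots_to_update(new_data_counts, pretrain_counts, k, eps=1e-6):
    """Rank memory slots by how strongly they activate on the new data
    relative to their usage during pretraining; return the top-k indices."""
    # Slots that fire often for the new knowledge but were rarely used in
    # pretraining score highest, so updating them should interfere least with
    # existing capabilities (a simple ratio heuristic, assumed for illustration).
    scores = new_data_counts / (pretrain_counts + eps)
    return torch.topk(scores, k).indices

def masked_memory_update(memory_values, grad, selected_slots, lr=1e-3):
    """Apply a gradient step only to the selected memory slots, leaving
    every other slot (and the knowledge it stores) untouched."""
    mask = torch.zeros_like(memory_values)
    mask[selected_slots] = 1.0
    with torch.no_grad():
        memory_values -= lr * grad * mask

# Toy example: 8 slots, update only the 2 most new-task-specific ones.
new_counts = torch.tensor([0., 5., 1., 0., 9., 0., 2., 0.])
pre_counts = torch.tensor([100., 3., 50., 80., 2., 10., 60., 5.])
select_slots_to_update(new_counts, pre_counts, k=2)  # -> tensor([4, 1])
```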

📝 Abstract
Modern language models are powerful, but typically static after deployment. A major obstacle to building models that continually learn over time is catastrophic forgetting, where updating on new data erases previously acquired capabilities. Motivated by the intuition that mitigating forgetting is challenging because trainable parameters are shared across all tasks, we investigate whether sparse parameter updates can enable learning without catastrophic forgetting. We introduce sparse memory finetuning, leveraging memory layer models (Berges et al., 2024), which are sparsely updated by design. By updating only the memory slots that are highly activated by a new piece of knowledge relative to usage on pretraining data, we reduce interference between new knowledge and the model's existing capabilities. We evaluate learning and forgetting compared to full finetuning and parameter-efficient finetuning with LoRA on two question answering tasks. We find that sparse memory finetuning learns new knowledge while exhibiting substantially less forgetting: while NaturalQuestions F1 drops by 89% after full finetuning on new facts and 71% with LoRA, sparse memory finetuning yields only an 11% drop with the same level of new knowledge acquisition. Our results suggest sparsity in memory layers offers a promising path toward continual learning in large language models.
Problem

Research questions and friction points this paper is trying to address.

Mitigating catastrophic forgetting in continually learning language models
Investigating sparse parameter updates to reduce interference
Enabling new knowledge acquisition while preserving existing capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse parameter updates mitigate catastrophic forgetting
Memory-layer slots are chosen by comparing their activation on the new data against their usage during pretraining
Sparsity enables new knowledge acquisition with minimal forgetting (see the gradient-masking sketch below)
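As a companion to the selection sketch above, one way to confine an ordinary finetuning loop to the selected slots is a gradient hook that zeroes updates for every other row of the memory value table. The hook-based approach and the name `freeze_unselected_slots` are assumptions for illustration, not the authors' implementation.

```python
import torch

def freeze_unselected_slots(memory_values, selected_slots):
    """Zero gradients for all memory slots except the selected ones, so a
    standard optimizer step only moves the chosen rows.

    memory_values  : nn.Parameter of shape (num_slots, dim), the value table
    selected_slots : 1-D LongTensor of slot indices chosen for the new data
    """
    mask = torch.zeros(memory_values.shape[0], 1, device=memory_values.device)
    mask[selected_slots] = 1.0
    # The hook multiplies the incoming gradient by the mask on every backward pass.
    return memory_values.register_hook(lambda grad: grad * mask)

# Example: 1024 slots of dimension 64; only slots 3, 17, and 42 receive updates
# from whatever optimizer the finetuning loop already uses.
memory = torch.nn.Parameter(torch.randn(1024, 64))
handle = freeze_unselected_slots(memory, torch.tensor([3, 17, 42]))
# ... run the usual finetuning loop ...
handle.remove()  # restore normal gradient flow afterwards
```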