Continual Learning for Encoder-only Language Models via a Discrete Key-Value Bottleneck

📅 2024-12-11
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address catastrophic forgetting in small encoder-only language models under continual learning, this paper proposes a lightweight continual learning method based on a discrete key-value bottleneck. The method localizes parameter updates to decouple knowledge across tasks, significantly reducing training and inference overhead in encoder-only architectures. Key contributions include: (i) the first adaptation of discrete key-value memory mechanisms to NLP continual learning; (ii) a task-agnostic discrete key initialization strategy; and (iii) a bottleneck architecture variant tailored to language modeling characteristics. Extensive experiments across four standard NLP continual learning benchmarks demonstrate that the approach matches state-of-the-art methods in mitigating forgetting and preserving performance on previously learned tasks, while reducing average parameter updates by 62% and inference latency by 38%.

📝 Abstract
Continual learning remains challenging across various natural language understanding tasks. When models are updated with new training data, they risk catastrophic forgetting of prior knowledge. In the present work, we introduce a discrete key-value bottleneck for encoder-only language models, allowing for efficient continual learning that requires only localized updates. Inspired by the success of the discrete key-value bottleneck in vision, we address new, NLP-specific challenges. We experiment with different bottleneck architectures to find the variants best suited to language, and present a generic, task-independent discrete key initialization technique for NLP. We evaluate the discrete key-value bottleneck in four continual learning NLP scenarios and demonstrate that it alleviates catastrophic forgetting. We also show that it offers performance competitive with other popular continual learning methods at lower computational cost.
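To make the core idea concrete, here is a minimal NumPy sketch of a discrete key-value bottleneck with localized updates: an encoder representation is quantized to its nearest frozen key, and training only touches the value row attached to that key. All dimensions, the random key initialization, and the function names are hypothetical stand-ins; the paper's actual task-independent key initialization and architecture variants are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: encoder output dim, number of discrete keys, value dim.
D_ENC, N_KEYS, D_VAL = 768, 32, 16

# Frozen keys (random codes stand in for the paper's task-independent
# initialization) and trainable values, initialized to zero.
keys = rng.normal(size=(N_KEYS, D_ENC))
values = np.zeros((N_KEYS, D_VAL))  # the only parameters updated in training

def bottleneck(h):
    """Quantize an encoder representation h to its nearest key (L2 distance)
    and retrieve the associated trainable value."""
    idx = int(np.argmin(np.linalg.norm(keys - h, axis=1)))
    return idx, values[idx]

def local_update(h, grad, lr=0.1):
    """Gradient step that modifies only the single selected value row,
    leaving knowledge stored under all other keys untouched."""
    idx, _ = bottleneck(h)
    values[idx] -= lr * grad
    return idx

h = rng.normal(size=D_ENC)            # stand-in for a pooled encoder embedding
idx = local_update(h, np.ones(D_VAL)) # exactly one value row changes
```

Because updates are confined to the retrieved row, parameters serving other inputs (and hence other tasks) are not overwritten, which is the mechanism behind the reduced forgetting and lower update cost described above.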
Problem

Research questions and friction points this paper is trying to address.

Address catastrophic forgetting in continual learning for NLP
Enable efficient updates via discrete key-value bottleneck
Maintain performance without task ID in single-head scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Discrete key-value bottleneck for NLP
Task-independent key initialization technique
Efficient localized updates prevent forgetting