🤖 AI Summary
To address representational drift and catastrophic forgetting caused by task evolution in continual learning, this paper proposes a replay-free global alignment framework. The method introduces, for the first time, a cross-task global prototype mechanism that unifies multi-task representation spaces and decouples local prototype drift. It further designs a self-supervised regularization objective by integrating masked language modeling with neighborhood-aware attention to suppress detrimental representational drift. Crucially, the approach requires no historical data storage and operates solely via fine-tuning a pre-trained language model. Evaluated on multiple NLP continual learning benchmarks, it achieves an average accuracy improvement of 7.2%, significantly mitigating knowledge forgetting induced by gradient interference. The framework establishes a novel paradigm for efficient and scalable continual language understanding.
📝 Abstract
Continual learning (CL) aims to learn a sequence of tasks over time, with data distributions shifting from one task to another. When training on new task data, data representations from old tasks may drift. Some negative representation drift can result in catastrophic forgetting, by causing the locally learned class prototypes and data representations to correlate poorly across tasks. To mitigate such representation drift, we propose a method that finds global prototypes to guide the learning, and learns data representations with the regularization of the self-supervised information. Specifically, for NLP tasks, we formulate each task in a masked language modeling style, and learn the task via a neighbor attention mechanism over a pre-trained language model. Experimental results show that our proposed method can learn fairly consistent representations with less representation drift, and significantly reduce catastrophic forgetting in CL without resampling data from past tasks.