🤖 AI Summary
Current large language models (LLMs) lack the human-like ability to continually inject new memories and activate them on demand; existing approaches rely heavily on extended context windows or external retrieval (e.g., RAG) and rarely model the incremental learning of everyday, event-based knowledge. Method: We propose MEGa, the first framework to encode episodic memories directly into model weights via gated low-rank adapters, mimicking the brain's complementary memory systems; it combines memory-embedding matching, query-driven weight activation, and continual learning. Contribution/Results: Evaluated on fictional-character and Wikipedia-event datasets, MEGa substantially mitigates catastrophic forgetting and outperforms RAG and other baselines in both memory recall and related question answering. It avoids key limitations of the external-memory and long-context paradigms, enabling efficient, weight-integrated, adaptive episodic memory.
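The summary's "memory embedding matching" and "query-driven weight activation" amount to scoring a query embedding against one stored embedding per memory and turning the scores into gates. Here is a minimal sketch of that step, assuming cosine-similarity matching and a softmax gate; the function name, temperature, and dimensions are illustrative, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def gate_values(query_emb: torch.Tensor,
                memory_embs: torch.Tensor,
                temperature: float = 0.1) -> torch.Tensor:
    """Return one gate per stored memory: a softmax over cosine
    similarities between the query embedding and each memory's
    stored embedding (one row of memory_embs per memory)."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), memory_embs, dim=-1)
    return F.softmax(sims / temperature, dim=-1)

# Toy usage: 3 stored memories with 16-dim embeddings.
mem = F.normalize(torch.randn(3, 16), dim=-1)
q = mem[1] + 0.05 * torch.randn(16)   # query close to memory 1
print(gate_values(q, mem))            # gate mass concentrates on index 1
```

A low temperature makes the gate nearly one-hot (selecting a single memory's weights); a higher temperature blends several adapters softly.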
📝 Abstract
Large Language Models (LLMs) currently struggle to sequentially add new memories and integrate new knowledge. These limitations contrast with the human ability to continuously learn from new experiences and acquire knowledge throughout life. Most existing approaches add memories either through large context windows or external memory buffers (e.g., Retrieval-Augmented Generation), and studies on knowledge injection rarely test scenarios resembling everyday life events. In this work, we introduce a continual learning framework, Memory Embedded in Gated LLMs (MEGa), which injects event memories directly into the weights of LLMs. Each memory is stored in a dedicated set of gated low-rank weights. During inference, a gating mechanism activates relevant memory weights by matching query embeddings to stored memory embeddings. This enables the model to both recall entire memories and answer related questions. On two datasets (fictional characters and Wikipedia events), MEGa outperforms baseline approaches in mitigating catastrophic forgetting. Our model draws inspiration from the complementary memory system of the human brain.
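To make "each memory is stored in a dedicated set of gated low-rank weights" concrete, below is a minimal PyTorch sketch of a frozen linear layer augmented with one low-rank adapter per memory, whose outputs are scaled by the gates computed above. The class name, rank, and initialization are assumptions for illustration, not the paper's actual implementation:

```python
import torch
import torch.nn as nn

class GatedLoRALinear(nn.Module):
    """A frozen base linear layer plus one low-rank adapter (A_i, B_i)
    per stored memory; per-memory gates scale each adapter's output."""
    def __init__(self, d_in: int, d_out: int, n_memories: int, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)          # base weights stay frozen
        # A_i down-projects to the low-rank space; B_i starts at zero so each
        # adapter initially contributes nothing (a common LoRA convention).
        self.A = nn.Parameter(torch.randn(n_memories, rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_memories, d_out, rank))

    def forward(self, x: torch.Tensor, gates: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_in); gates: (n_memories,) from embedding matching
        low_rank = torch.einsum("mri,bi->bmr", self.A, x)        # down-project
        delta = torch.einsum("mor,bmr->bmo", self.B, low_rank)   # up-project
        return self.base(x) + torch.einsum("m,bmo->bo", gates, delta)

# Toy usage: a query whose gate selects mostly the second memory's adapter.
layer = GatedLoRALinear(d_in=16, d_out=16, n_memories=3)
x = torch.randn(2, 16)
gates = torch.tensor([0.05, 0.90, 0.05])   # e.g. from the gating sketch above
print(layer(x, gates).shape)               # torch.Size([2, 16])
```

Under this reading, continual learning means training only the new memory's (A_i, B_i) pair while the base weights and previously stored adapters stay untouched, which is how the gating would limit interference between memories.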