Modular Memory is the Key to Continual Learning Agents

📅 2026-03-02
🤖 AI Summary
Current foundation models exhibit significant limitations in continual learning, experience accumulation, and personalization, with conventional weight-update-based approaches particularly prone to catastrophic forgetting. This work proposes a novel architecture centered on modular memory that, for the first time, systematically integrates in-weight learning (IWL) and in-context learning (ICL): ICL enables rapid acquisition of new knowledge, IWL provides stable, long-term capability enhancement, and modular memory serves as a synergistic bridge between the two. The resulting framework offers an innovative and practical pathway toward building agents capable of lifelong evolution, continuous adaptation, and personalized interaction.

📝 Abstract
Foundation models have transformed machine learning through large-scale pretraining and increased test-time compute. Despite surpassing human performance in several domains, these models remain fundamentally limited in continuous operation, experience accumulation, and personalization, capabilities that are central to adaptive intelligence. While continual learning research has long targeted these goals, its historical focus on in-weight learning (IWL), i.e., updating a single model's parameters to absorb new knowledge, has rendered catastrophic forgetting a persistent challenge. Our position is that combining the strengths of In-Weight Learning (IWL) and the newly emerged capabilities of In-Context Learning (ICL) through the design of modular memory is the missing piece for continual adaptation at scale. We outline a conceptual framework for modular memory-centric architectures that leverage ICL for rapid adaptation and knowledge accumulation, and IWL for stable updates to model capabilities, charting a practical roadmap toward continually learning agents.
Problem

Research questions and friction points this paper is trying to address.

Continual Learning
Catastrophic Forgetting
In-Weight Learning
In-Context Learning
Modular Memory
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular Memory
Continual Learning
In-Context Learning
In-Weight Learning
Adaptive Intelligence
Authors

Vaggelis Dorovatas
Toyota Motor Europe

Malte Schwerin
University of Bremen

Andrew D. Bagdanov
Associate Professor, University of Florence, Italy
Computer vision, deep learning, artificial intelligence, deep reinforcement learning, image processing

Lucas Caccia
Microsoft Research
Deep Learning, Continual Learning, Natural Language Processing

Antonio Carta
Assistant Professor @ Università di Pisa
continual learning, lifelong learning, deep learning, recurrent neural networks

Laurent Charlin
Associate Professor, HEC Montréal & Mila, Canada CIFAR AI Chair
Machine Learning, Artificial Intelligence

Barbara Hammer
Professor, Bielefeld University
machine learning, data mining, neural networks, bioinformatics, theoretical computer science

Tyler L. Hayes
Georgia Tech
Artificial Intelligence, Machine Learning, Computer Vision, Lifelong Machine Learning

Timm Hess
KU Leuven

Christopher Kanan
University of Rochester
Artificial Intelligence, Deep Learning, AGI, Multi-Modal AI, Cognitive Science

Dhireesha Kudithipudi
Founding Director of MATRIX AI Consortium, Robert F McDermott Endowed Chair, UTSA
Neuro-Inspired AI, Energy Efficient AI/ML, Neuromorphic Computing, AI Accelerators

Xialei Liu
Nankai University

Vincenzo Lomonaco
Associate Professor @ LUISS | Co-Founder @ ContinualAI.org & ContinualIST.ai
Artificial Intelligence, Deep Learning, Continual Learning, Multi-Agent Systems, Agentic AI

Jorge Mendez-Mendez
Stony Brook University

Darshan Patil
HEC Montreal, Mila–Quebec AI Institute, Canada CIFAR AI Chair

Ameya Prabhu
Tübingen AI Center, University of Tübingen
Data-Centric ML, Science of Benchmarking, Continual Learning, Economics of Transformative AI

Elisa Ricci
University of Trento & Fondazione Bruno Kessler
Computer Vision, Deep Learning, Robotics

Tinne Tuytelaars
KU Leuven - PSI, Belgium
computer vision, continual learning

Gido M. van de Ven
University of Groningen
continual learning, replay, deep learning, neuroscience, generative models

Liyuan Wang
Tsinghua University
bio-inspired learning, continual learning, neuroscience

Joost van de Weijer
Computer Vision Center, Universitat Autònoma de Barcelona
Computer Vision, Deep Learning, Continual Learning

Jonghyun Choi
Associate Professor, Electrical and Computer Engineering, Seoul National University
Computer Vision, Machine Learning, Continual Learning, Embodied AI, Artificial Intelligence

Martin Mundt
Professor for Lifelong Machine Learning at University of Bremen
deep learning, lifelong machine learning, continual learning

Rahaf Aljundi
Senior Researcher at Toyota Motor Europe
Machine learning, Computer vision