The AI Hippocampus: How Far Are We From Human Memory?

📅 2026-01-14
🏛️ Trans. Mach. Learn. Res.
📈 Citations: 3
Influential: 0
🤖 AI Summary
This work addresses the construction of human-like memory mechanisms in large language models and multimodal foundation models to support continual learning, personalized reasoning, and cross-modal consistency. It proposes the first unified taxonomic framework that systematically integrates three major memory paradigms: implicit memory (e.g., parameterized memory), explicit memory (e.g., external retrieval and graph-structured knowledge bases), and agentic memory. The framework is further extended to multimodal settings, elucidating the critical role of memory in cross-modal alignment and agent collaboration. Through a comprehensive literature review and taxonomic analysis, the study surveys existing technical approaches, evaluation benchmarks, and core challenges, thereby establishing a theoretical foundation and offering clear directions for future research on human-like memory systems in artificial intelligence.
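The agentic-memory paradigm the summary mentions can be illustrated with a minimal sketch. This is a hypothetical design for illustration only, not an architecture from the survey: a bounded working buffer whose evicted observations are consolidated into a persistent long-term log, so the agent retains episodes beyond its immediate context.

```python
from collections import deque

class AgentMemory:
    """Illustrative agentic memory: a small working buffer whose overflow
    is consolidated into a persistent long-term log."""

    def __init__(self, working_capacity: int = 3):
        self.working = deque(maxlen=working_capacity)  # recent observations
        self.long_term = []                            # persistent episodes

    def observe(self, event: str) -> None:
        if len(self.working) == self.working.maxlen:
            # Consolidate the oldest working item before deque eviction drops it.
            self.long_term.append(self.working[0])
        self.working.append(event)

    def recall(self, keyword: str) -> list:
        # Naive substring search over long-term then working memory.
        return [e for e in self.long_term + list(self.working) if keyword in e]

mem = AgentMemory(working_capacity=3)
for event in ["saw a locked door", "picked up a key", "unlocked the door",
              "entered the hall", "spoke to the guide"]:
    mem.observe(event)
```

Real agentic systems add timestamps, relevance scoring, and summarization during consolidation; the point here is only the split between a transient working buffer and a durable store that survives it.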

📝 Abstract
Memory plays a foundational role in augmenting the reasoning, adaptability, and contextual fidelity of modern Large Language Models and Multi-Modal LLMs. As these models transition from static predictors to interactive systems capable of continual learning and personalized inference, the incorporation of memory mechanisms has emerged as a central theme in their architectural and functional evolution.

This survey presents a comprehensive and structured synthesis of memory in LLMs and MLLMs, organizing the literature into a cohesive taxonomy comprising implicit, explicit, and agentic memory paradigms. Specifically, the survey delineates three primary memory frameworks.

Implicit memory refers to the knowledge embedded within the internal parameters of pre-trained transformers, encompassing their capacity for memorization, associative retrieval, and contextual reasoning. Recent work has explored methods to interpret, manipulate, and reconfigure this latent memory.

Explicit memory involves external storage and retrieval components designed to augment model outputs with dynamic, queryable knowledge representations, such as textual corpora, dense vectors, and graph-based structures, thereby enabling scalable and updatable interaction with information sources.

Agentic memory introduces persistent, temporally extended memory structures within autonomous agents, facilitating long-term planning, self-consistency, and collaborative behavior in multi-agent systems, with relevance to embodied and interactive AI.

Extending beyond text, the survey examines the integration of memory within multi-modal settings, where coherence across vision, language, audio, and action modalities is essential. Key architectural advances, benchmark tasks, and open challenges are discussed, including issues related to memory capacity, alignment, factual consistency, and cross-system interoperability.
Problem

Research questions and friction points this paper is trying to address.

memory, Large Language Models, Multi-Modal LLMs, continual learning, autonomous agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

memory taxonomy, implicit memory, explicit memory, agentic memory, multi-modal memory
Zixia Jia
BigAI
NLP
Jiaqi Li
Beijing Institute for General Artificial Intelligence (bigai.com)
Yipeng Kang
BIGAI
Natural language processing
Yuxuan Wang
Peking University
Omni-LM, Multimodal Agent
Tong Wu
BIGAI, Tsinghua University
Text Generation, Diffusion Language Model
Quansen Wang
State Key Laboratory of General Artificial Intelligence, BIGAI, Peking University
Xiaobo Wang
University of Science and Technology of China
Natural Language Processing
Shuyi Zhang
East China Normal University
Big data analysis, Semi-supervised learning, High-dimensional statistics, Applied data science
Junzhe Shen
State Key Laboratory of General Artificial Intelligence, BIGAI, Peking University
Qing Li
State Key Laboratory of General Artificial Intelligence, BIGAI, Peking University
Siyuan Qi
Gyges Labs
Machine Learning, Computer Vision
Yitao Liang
Peking University
Machine Learning, AI Reasoning, AI Agent
Di He
Peking University
Machine Learning
Zilong Zheng
State Key Laboratory of General Artificial Intelligence, BIGAI, Peking University
Song-Chun Zhu
State Key Laboratory of General Artificial Intelligence, BIGAI, Peking University