Rethinking Memory Mechanisms of Foundation Agents in the Second Half: A Survey

📅 2026-01-14
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses the challenges that foundation agents face in long-horizon, dynamic, and user-dependent environments, particularly context explosion and sustained information management, which call for efficient memory mechanisms to deliver practical utility. The paper proposes the first unified three-dimensional framework, integrating internal and external memory substrates, five cognitive mechanisms, and a dual agent- and user-centric perspective, to systematically structure research on agent memory. Drawing on a comprehensive review of hundreds of studies published before 2026, the framework synthesizes insights from memory modeling, cognitive science, and agent architecture to clarify memory operation strategies, evaluation benchmarks, and learning methodologies. It not only delineates structured pathways through existing research but also identifies key open problems, providing a theoretical foundation and directional guidance for the future design of intelligent agent memory systems.

📝 Abstract
Research in artificial intelligence is undergoing a paradigm shift from prioritizing model innovation on benchmark scores toward emphasizing problem definition and rigorous real-world evaluation. As the field enters the "second half," the central challenge becomes real utility in long-horizon, dynamic, and user-dependent environments, where agents face context explosion and must continuously accumulate, manage, and selectively reuse large volumes of information across extended interactions. Memory, the subject of hundreds of papers released this year, therefore emerges as the critical solution for closing the utility gap. In this survey, we provide a unified view of foundation agent memory along three dimensions: memory substrate (internal and external), cognitive mechanism (episodic, semantic, sensory, working, and procedural), and memory subject (agent- and user-centric). We then analyze how memory is instantiated and operated under different agent topologies and highlight learning policies over memory operations. Finally, we review evaluation benchmarks and metrics for assessing memory utility, and outline open challenges and future directions.
Problem

Research questions and friction points this paper is trying to address.

foundation agents
memory mechanisms
long-horizon environments
context explosion
real-world utility
Innovation

Methods, ideas, or system contributions that make the work stand out.

memory mechanisms
foundation agents
cognitive architecture
long-horizon reasoning
agent memory evaluation
Wei-Chieh Huang
University of Illinois Chicago
Natural Language Processing
Weizhi Zhang
University of Illinois Chicago
Personalization, Large Language Models, Agents
Yueqing Liang
Illinois Institute of Technology
Personalization, Recommender System, LLM, Multimodality, Machine Learning Fairness
Yuanchen Bei
UIUC
Yankai Chen
Postdoctoral Associate, Cornell University
Information Retrieval, Knowledge Mining, Large Language Models, Agentic AI
Tao Feng
PhD, UIUC
LLM, GNN, RLHF
Xinyu Pan
Department of Information Engineering, The Chinese University of Hong Kong
Machine Learning, Computer Vision
Zhen Tan
Ph.D. at Arizona State University
Data Mining, Machine Learning, AI for Science, User-centric Explanation, Responsible AI
Yu Wang
PhD student, University of California, San Diego
Natural Language Processing, Multimodality
Tianxin Wei
University of Illinois Urbana Champaign
Trustworthy Machine Learning, LLM, Information Retrieval
Shanglin Wu
Emory
Ruiyao Xu
Northwestern
Liangwei Yang
Salesforce Research
Network Science, Recommender System, Efficient Modeling
Rui Yang
University of Illinois Urbana-Champaign
Reinforcement Learning, Large Language Model, Agent
Wooseong Yang
University of Illinois at Chicago
Recommender Systems, Large Language Models
Chin-Yuan Yeh
UIC
Hanrong Zhang
UIC
Haozhen Zhang
Nanyang Technological University
Data Mining, Graph Neural Networks, Large Language Models
Siqi Zhu
UIUC
Henry Peng Zou
University of Illinois Chicago
Agents, Large Language Models, Multimodal Learning, Natural Language Processing
Wanjia Zhao
Stanford University
Machine Learning
Song Wang
Assistant Professor, University of Central Florida
Efficient and Safe AI, Computational Biology
Wujiang Xu
Rutgers, Meta, Ant Group
Agentic AI, LLM Agents
Zixuan Ke
Salesforce AI Research
Large Language Model, Continual Learning, Natural Language Processing
Zheng Hui
University of Cambridge
Natural Language Processing, LLM Safety & Alignment, Multimodal