General Agentic Memory Via Deep Research

📅 2025-11-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing AI agents employ static memory mechanisms that suffer from runtime context loss, limiting their capability in complex tasks. Method: This paper proposes the General Agentic Memory (GAM) framework, the first to introduce just-in-time (JIT) compilation principles into memory management. GAM features a dual-module architecture of a lightweight memory and a universal page-store, enabling dynamic, runtime-optimized context construction. It integrates online intelligent retrieval, offline lightweight storage, and end-to-end reinforcement learning for memory optimization. Contribution/Results: Experiments demonstrate that GAM significantly outperforms state-of-the-art methods across diverse memory-intensive tasks: average task completion rate improves by 18.7%, while context relevance and information completeness are simultaneously enhanced. Moreover, GAM substantially improves the test-time scalability of large language models.

📝 Abstract
Memory is critical for AI agents, yet the widely-adopted static memory, aiming to create readily available memory in advance, is inevitably subject to severe information loss. To address this limitation, we propose a novel framework called general agentic memory (GAM). GAM follows the principle of "just-in-time (JIT) compilation", where it focuses on creating optimized contexts for its client at runtime while keeping only simple but useful memory during the offline stage. To this end, GAM employs a duo-design with the following components. 1) Memorizer, which highlights key historical information using a lightweight memory, while maintaining complete historical information within a universal page-store. 2) Researcher, which retrieves and integrates useful information from the page-store for its online request, guided by the pre-constructed memory. This design allows GAM to effectively leverage the agentic capabilities and test-time scalability of frontier large language models (LLMs), while also facilitating end-to-end performance optimization through reinforcement learning. In our experimental study, we demonstrate that GAM achieves substantial improvements in various memory-grounded task completion scenarios compared with existing memory systems.
Problem

Research questions and friction points this paper is trying to address.

Addressing severe information loss in static AI memory systems
Creating optimized contexts for AI agents at runtime
Enhancing memory-grounded task completion with dynamic retrieval
Innovation

Methods, ideas, or system contributions that make the work stand out.

General Agentic Memory applies the just-in-time (JIT) compilation principle to agent memory
The Memorizer maintains a lightweight memory alongside a complete, universal page-store
The Researcher retrieves and integrates information from the page-store to serve online requests
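The duo-design above can be illustrated with a minimal sketch. All class and method names here are assumptions for illustration, not the authors' API; the "highlight" and relevance functions are deliberately toy stand-ins for the LLM-driven summarization and agentic retrieval the paper describes.

```python
# Hypothetical sketch of GAM's duo-design (names are assumptions, not the paper's API).
from dataclasses import dataclass, field


@dataclass
class PageStore:
    """Universal store keeping the complete history as pages."""
    pages: list = field(default_factory=list)

    def add(self, text: str) -> int:
        self.pages.append(text)
        return len(self.pages) - 1  # page id of the stored text


class Memorizer:
    """Offline stage: keeps lightweight highlights while archiving full pages."""

    def __init__(self, store: PageStore):
        self.store = store
        self.memory: list[tuple[int, str]] = []  # (page_id, highlight)

    def observe(self, text: str) -> None:
        pid = self.store.add(text)           # complete history is never discarded
        highlight = text.split(".")[0][:80]  # toy "summary": first clause only
        self.memory.append((pid, highlight))


class Researcher:
    """Online stage: uses the lightweight memory to pull relevant full pages
    and assemble a runtime-optimized context (the JIT step)."""

    def __init__(self, memorizer: Memorizer):
        self.mem = memorizer

    def build_context(self, query: str, k: int = 2) -> list[str]:
        # Toy relevance: count words shared between the query and a highlight.
        def score(highlight: str) -> int:
            return len(set(query.lower().split()) & set(highlight.lower().split()))

        ranked = sorted(self.mem.memory, key=lambda m: score(m[1]), reverse=True)
        return [self.mem.store.pages[pid] for pid, _ in ranked[:k]]
```

The key property the sketch preserves is that the offline memory is cheap and lossy, while the full page-store remains intact, so the online Researcher can always recover detail the highlights dropped.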
Authors
B. Y. Yan (Beijing Academy of Artificial Intelligence)
Chaofan Li (Beijing University of Posts and Telecommunications)
Hongjin Qian (Peking University)
Shuqi Lu (Beijing Academy of Artificial Intelligence)
Zheng Liu (Beijing Academy of Artificial Intelligence, Hong Kong Polytechnic University)