🤖 AI Summary
Existing AI agents rely on static memory mechanisms that suffer from runtime context loss, limiting their capability in complex tasks. Method: This paper proposes the General Agentic Memory (GAM) framework, the first to introduce just-in-time (JIT) compilation principles into memory management. GAM features a duo design of a Memorizer, which pairs a lightweight memory with a complete page-store, and a Researcher, which constructs dynamic, runtime-optimized contexts. It integrates online intelligent retrieval, offline lightweight storage, and end-to-end reinforcement learning for memory optimization. Contribution/Results: Experiments demonstrate that GAM significantly outperforms state-of-the-art methods across diverse memory-intensive tasks: the average task completion rate improves by 18.7%, while context relevance and information completeness are simultaneously enhanced. Moreover, GAM substantially improves the test-time scalability of large language models.
📝 Abstract
Memory is critical for AI agents, yet the widely adopted static memory, which aims to create readily available memory in advance, is inevitably subject to severe information loss. To address this limitation, we propose a novel framework called general agentic memory (GAM). GAM follows the principle of "just-in-time (JIT) compilation": it focuses on creating optimized contexts for its client at runtime, while keeping only simple but useful memory during the offline stage. To this end, GAM employs a duo design with the following components. 1) Memorizer, which highlights key historical information using a lightweight memory, while maintaining complete historical information within a universal page-store. 2) Researcher, which retrieves and integrates useful information from the page-store for each online request, guided by the pre-constructed memory. This design allows GAM to effectively leverage the agentic capabilities and test-time scalability of frontier large language models (LLMs), while also facilitating end-to-end performance optimization through reinforcement learning. In our experimental study, we demonstrate that GAM achieves substantial improvements over existing memory systems across various memory-grounded task-completion scenarios.
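The Memorizer/Researcher split described above can be sketched as follows. This is a minimal illustration of the control flow only; all class and method names are assumptions for exposition, the paper's actual components are LLM-driven, and the keyword-overlap retrieval here stands in for the framework's agentic retrieval.

```python
class Memorizer:
    """Offline stage (sketch): keep lightweight highlights of key
    information, plus the complete history in a universal page-store."""

    def __init__(self):
        self.memory = []       # lightweight memory: (page_id, highlight) pairs
        self.page_store = {}   # complete history, keyed by page id

    def memorize(self, page_id, text, highlight):
        # Store the full page, and record only a short highlight in memory.
        self.page_store[page_id] = text
        self.memory.append((page_id, highlight))


class Researcher:
    """Online stage (sketch): use the pre-constructed memory as a guide
    to retrieve full pages and assemble a request-specific context
    just in time."""

    def __init__(self, memorizer):
        self.m = memorizer

    def build_context(self, query):
        # Guide retrieval with the lightweight memory: select pages whose
        # highlights share words with the query (a toy relevance signal).
        q = set(query.lower().split())
        hits = [pid for pid, h in self.m.memory
                if q & set(h.lower().split())]
        # Integrate the full information from the page-store at runtime.
        return "\n".join(self.m.page_store[pid] for pid in hits)


mem = Memorizer()
mem.memorize("p1", "Full log of the database migration steps ...",
             "database migration")
mem.memorize("p2", "Notes from the design review meeting ...",
             "design review")
res = Researcher(mem)
print(res.build_context("how did the database migration go"))
```

The point of the sketch is the division of labor: the offline side stays cheap and lossless (highlights plus a complete page-store), while the online side does the expensive, query-specific context construction, mirroring the JIT-compilation analogy.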