AI Summary
This work addresses the limitations of large language model agents in long-horizon reasoning, which stem from constrained context windows and from the disjointed treatment of short- and long-term memory in existing approaches, which lack unified optimization and adaptability. To overcome these challenges, we propose the Agentic Memory (AgeMem) framework, which, for the first time, formulates memory management as learnable tool-augmented actions within the agent's policy, enabling autonomous storage, retrieval, updating, and discarding of memories through a tool-based mechanism. We further introduce a three-stage progressive reinforcement learning curriculum combined with a step-wise GRPO algorithm to mitigate the sparse reward problem. Experimental results demonstrate that AgeMem significantly outperforms strong baselines across five long-horizon task benchmarks, achieving higher task completion rates, improved memory quality, and more efficient context utilization.
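The summary does not spell out the step-wise GRPO computation. As a hedged sketch only: GRPO-style training typically samples a group of trajectories and normalizes each trajectory's return against the group's statistics. The function below illustrates that group-relative normalization over per-step rewards; the aggregation by summation and the stabilizing `eps` constant are assumptions for illustration, not the paper's exact formulation.

```python
from statistics import mean, pstdev

def group_relative_advantages(step_rewards, eps=1e-8):
    """GRPO-style advantages from per-step rewards (illustrative sketch).

    step_rewards: one reward sequence per sampled trajectory in the group.
    Each trajectory's return is its summed step rewards; advantages are
    the returns normalized within the group (minus mean, over std).
    """
    returns = [sum(r) for r in step_rewards]
    mu = mean(returns)
    sigma = pstdev(returns)
    return [(g - mu) / (sigma + eps) for g in returns]

# Example: three sampled trajectories with step-level rewards.
advs = group_relative_advantages([[0.0, 1.0], [0.5, 0.5], [0.0, 0.0]])
```

Because advantages are relative within the group, trajectories with equal returns receive equal credit and the advantages sum to (approximately) zero, which is what lets a dense, step-level reward signal counteract sparsity.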
Abstract
Large language model (LLM) agents face fundamental limitations in long-horizon reasoning due to finite context windows, making effective memory management critical. Existing methods typically handle long-term memory (LTM) and short-term memory (STM) as separate components, relying on heuristics or auxiliary controllers, which limits adaptability and precludes end-to-end optimization. In this paper, we propose Agentic Memory (AgeMem), a unified framework that integrates LTM and STM management directly into the agent's policy. AgeMem exposes memory operations as tool-based actions, enabling the LLM agent to autonomously decide what and when to store, retrieve, update, summarize, or discard. To train such unified behaviors, we introduce a three-stage progressive reinforcement learning strategy and design a step-wise GRPO algorithm to address the sparse and discontinuous rewards induced by memory operations. Experiments on five long-horizon benchmarks demonstrate that AgeMem consistently outperforms strong memory-augmented baselines across multiple LLM backbones, achieving improved task performance, higher-quality long-term memory, and more efficient context usage.
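The abstract describes exposing memory operations (store, retrieve, update, summarize, discard) as tool-based actions in the agent's action space. The actual AgeMem tool schema is not given here; the class below is a minimal, hypothetical sketch of what such a toolbox could look like, with illustrative method names and a naive keyword match standing in for whatever retrieval the framework actually learns.

```python
class MemoryToolbox:
    """Hypothetical sketch: memory operations exposed as callable tools.

    Names, signatures, and the key->text store are illustrative
    assumptions, not the AgeMem specification.
    """

    def __init__(self):
        self.ltm = {}  # long-term memory: key -> text

    def store(self, key, text):
        self.ltm[key] = text
        return f"stored:{key}"

    def retrieve(self, query):
        # Naive substring match stands in for learned retrieval.
        return [v for k, v in self.ltm.items() if query in k or query in v]

    def update(self, key, text):
        if key in self.ltm:
            self.ltm[key] = text
            return f"updated:{key}"
        return "miss"

    def discard(self, key):
        return "discarded" if self.ltm.pop(key, None) is not None else "miss"
```

Under a design like this, the policy emits a tool call (e.g. `store("goal", "book a flight")`) as an ordinary action, so deciding *whether* and *when* to manage memory becomes part of the learned behavior rather than a fixed heuristic pipeline.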