🤖 AI Summary
Existing LLM agents lack dynamic memory accumulation, retrieval, and updating capabilities over continuous task streams, hindering long-horizon planning and experience reuse.
Method: We introduce the first memory evaluation benchmark for streaming tasks, built around a dynamic, self-evolving memory assessment framework. It incorporates ExpRAG, an experience-reuse mechanism that enables cross-task knowledge transfer, and ReMem, a reasoning-action-memory co-optimization pipeline that supports test-time memory evolution. The benchmark unifies more than ten memory modules and covers both multi-turn goal-oriented and single-turn reasoning scenarios.
Contribution/Results: Evaluated across ten diverse datasets, our framework substantially improves the reuse of historical experience and cross-task generalization. It establishes a novel, evaluable, and optimizable paradigm for continual learning and state retention in LLM agents, advancing memory-aware agent design.
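The summary above describes ExpRAG as an experience-reuse mechanism: retrieve relevant past task experiences and supply them as context for a new task. A minimal sketch of that idea follows; all class, field, and function names here are hypothetical, and the toy word-overlap similarity stands in for the embedding-based retrieval a real system would use.

```python
from dataclasses import dataclass, field

@dataclass
class Experience:
    task: str
    trajectory: str  # condensed record of the agent's actions and outcomes
    success: bool

@dataclass
class ExperienceStore:
    experiences: list[Experience] = field(default_factory=list)

    def add(self, exp: Experience) -> None:
        self.experiences.append(exp)

    def retrieve(self, query: str, k: int = 3) -> list[Experience]:
        # Toy lexical similarity: count overlapping words between the query
        # and each stored task description.
        def score(exp: Experience) -> int:
            return len(set(query.lower().split()) & set(exp.task.lower().split()))
        return sorted(self.experiences, key=score, reverse=True)[:k]

def build_prompt(task: str, store: ExperienceStore) -> str:
    # Prepend retrieved prior experiences to the prompt for the new task,
    # so the agent can transfer knowledge across tasks.
    context = "\n".join(
        f"- {e.task}: {e.trajectory} ({'success' if e.success else 'failure'})"
        for e in store.retrieve(task)
    )
    return f"Relevant past experience:\n{context}\n\nCurrent task: {task}"
```

The design choice mirrored here is that experiences are stored as condensed trajectories rather than raw transcripts, keeping retrieved context short enough to fit in the prompt.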
📝 Abstract
Statefulness is essential for large language model (LLM) agents to perform long-term planning and problem-solving. This makes memory a critical component, yet its management and evolution remain largely underexplored. Existing evaluations mostly focus on static conversational settings, where memory is passively retrieved from dialogue to answer queries, overlooking the dynamic ability to accumulate and reuse experience across evolving task streams. In real-world settings such as interactive problem assistants or embodied agents, LLMs must handle continuous task streams, yet they often fail to learn from accumulated interactions and lose valuable contextual insights. This limitation calls for test-time evolution, in which LLMs retrieve, integrate, and update memory continuously during deployment. To bridge this gap, we introduce Evo-Memory, a comprehensive streaming benchmark and framework for evaluating self-evolving memory in LLM agents. Evo-Memory structures datasets into sequential task streams, requiring LLMs to search, adapt, and evolve memory after each interaction. We unify and implement more than ten representative memory modules and evaluate them across ten diverse datasets spanning multi-turn goal-oriented tasks and single-turn reasoning and QA. To better benchmark experience reuse, we provide a baseline method, ExpRAG, for retrieving and utilizing prior experience, and further propose ReMem, an action-think-memory refinement pipeline that tightly integrates reasoning, task actions, and memory updates to achieve continual improvement.
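The streaming protocol the abstract describes, in which the agent searches memory, acts, and then evolves memory after every interaction, can be sketched as a simple loop. This is an illustrative skeleton, not the paper's implementation; the `solve` and `summarize` callables and their signatures are assumptions standing in for the agent's reason/act step and its memory-update step.

```python
def run_stream(tasks, solve, summarize):
    """Process a sequential task stream with self-evolving memory.

    solve(task, memory) -> result: the agent's reason-and-act step,
        which may consult the accumulated memory.
    summarize(task, result) -> str: condenses the finished episode
        into a memory entry (hypothetical signature, for illustration).
    """
    memory: list[str] = []
    results = []
    for task in tasks:
        result = solve(task, memory)            # search/reason/act with current memory
        memory.append(summarize(task, result))  # evolve memory after each interaction
        results.append(result)
    return results

# Toy usage: "solving" arithmetic tasks while accumulating a memory log.
outputs = run_stream(
    tasks=[(1, 2), (3, 4)],
    solve=lambda t, mem: t[0] + t[1],
    summarize=lambda t, r: f"{t} -> {r}",
)  # outputs == [3, 7]
```

The key property, per the abstract, is that memory is written during deployment rather than only read, so later tasks in the stream can benefit from earlier ones.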