🤖 AI Summary
This work addresses a critical limitation in existing agent memory evaluations, which decouple memory from action and thus fail to capture how memory guides decision-making in realistic scenarios. To bridge this gap, the authors propose MemoryArena, a unified benchmark framework for evaluating agent memory across multiple sessions. MemoryArena introduces human-crafted tasks with explicitly interdependent subtasks, spanning web navigation, preference-constrained planning, progressive information search, and sequential formal reasoning, that couple memory acquisition with action selection, establishing a new paradigm for assessing memory in multi-session, task-dependent settings. Experimental results reveal that even agents with near-saturated performance on current long-context memory benchmarks degrade significantly on MemoryArena, exposing a key blind spot in contemporary memory evaluation approaches.
📝 Abstract
Existing evaluations of agents with memory typically assess memorization and action in isolation. One class of benchmarks evaluates memorization by testing recall of past conversations or text but fails to capture how memory is used to guide future decisions. Another class focuses on agents acting in single-session tasks without the need for long-term memory. However, in realistic settings, memorization and action are tightly coupled: agents acquire memory while interacting with the environment, and subsequently rely on that memory to solve future tasks. To capture this setting, we introduce MemoryArena, a unified evaluation gym for benchmarking agent memory in multi-session Memory-Agent-Environment loops. The benchmark consists of human-crafted agentic tasks with explicitly interdependent subtasks, where agents must learn from earlier actions and feedback by distilling experiences into memory, and subsequently use that memory to guide later actions to solve the overall task. MemoryArena supports evaluation across web navigation, preference-constrained planning, progressive information search, and sequential formal reasoning, and reveals that agents with near-saturated performance on existing long-context memory benchmarks like LoCoMo perform poorly in our agentic setting, exposing a gap in current evaluations for agents with memory.
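The multi-session Memory-Agent-Environment loop described above can be illustrated with a minimal toy sketch. All names here (`Agent`, `Environment`, `run_sessions`, the secret-code task) are illustrative assumptions for exposition, not MemoryArena's actual API or tasks: an early session yields an observation that the agent must distill into memory, and a later session can only be solved by acting on that memory.

```python
# Toy sketch of a multi-session memory-agent-environment loop.
# All class and function names are hypothetical, not MemoryArena's API.

class Environment:
    """Two-session task: session 0 reveals a code; session 1 is solved
    only by an action that recalls that code."""
    def __init__(self):
        self.secret = "blue-7"

    def step(self, session, action):
        if session == 0:
            # The observation carries information needed in a later session.
            return f"note: the secret code is {self.secret}", False
        # Session 1: success depends on using memory from session 0.
        return "done", action == self.secret


class Agent:
    def __init__(self):
        self.memory = []  # persists across sessions

    def act(self, session, observation):
        if session == 1:
            # Retrieve the experience distilled in the earlier session.
            for note in self.memory:
                if note.startswith("note: the secret code is "):
                    return note.rsplit(" ", 1)[-1]
        return "explore"

    def memorize(self, observation):
        self.memory.append(observation)


def run_sessions(agent, env, n_sessions=2):
    solved = False
    for s in range(n_sessions):
        action = agent.act(s, "start")
        obs, solved = env.step(s, action)
        agent.memorize(obs)  # memory acquisition happens while acting
    return solved


print(run_sessions(Agent(), Environment()))  # → True
```

The point of the sketch is the coupling: an agent that skipped `memorize`, or whose memory was wiped between sessions, would fail session 1 regardless of its single-session competence, which is the failure mode the benchmark is designed to surface.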