🤖 AI Summary
This work addresses the limitation of existing memory evaluation benchmarks for large language models, which are predominantly confined to static dialogue scenarios and fail to assess memory capabilities in long-term, goal-evolving project-based interactions. To bridge this gap, we propose RealMem—the first memory-driven interactive benchmark grounded in realistic project contexts—encompassing over 2,000 cross-session dialogues across 11 task categories and introducing a novel evaluation paradigm tailored to long-term project evolution. We develop a dynamic memory evolution simulation system by integrating project construction, multi-agent dialogue generation, and memory scheduling. Experimental results reveal that current models exhibit significant deficiencies in tracking long-term project states and dynamic contextual dependencies, thereby offering clear guidance for future memory architecture design.
📝 Abstract
As Large Language Models (LLMs) evolve from static dialogue interfaces to autonomous general agents, effective memory is paramount to ensuring long-term consistency. However, existing benchmarks primarily focus on casual conversation or task-oriented dialogue, failing to capture **"long-term project-oriented"** interactions where agents must track evolving goals. To bridge this gap, we introduce **RealMem**, the first benchmark grounded in realistic project scenarios. RealMem comprises over 2,000 cross-session dialogues across eleven scenarios and uses natural user queries for evaluation. We propose a synthesis pipeline that integrates Project Foundation Construction, Multi-Agent Dialogue Generation, and Memory and Schedule Management to simulate the dynamic evolution of memory. Experiments reveal that current memory systems face significant challenges in managing the long-term project states and dynamic context dependencies inherent in real-world projects. Our code and datasets are available at [https://github.com/AvatarMemory/RealMemBench](https://github.com/AvatarMemory/RealMemBench).
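The three-stage synthesis pipeline named in the abstract can be pictured as a simple simulation loop. The sketch below is purely illustrative: every class, function, and field name is a hypothetical stand-in, not taken from the RealMem codebase, and the dialogue generation is stubbed out.

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    """Hypothetical project foundation: an evolving goal plus mutable state."""
    goal: str
    state: dict = field(default_factory=dict)

@dataclass
class Session:
    """One cross-session dialogue, reduced here to a list of turn strings."""
    turns: list

class MemoryStore:
    """Stage 3 stand-in: persists salient facts across sessions."""
    def __init__(self):
        self.entries = []

    def update(self, session: Session):
        # Store this session's turns so later sessions can track evolving goals.
        self.entries.extend(session.turns)

def construct_project(goal: str) -> Project:
    # Stage 1: Project Foundation Construction (goal and initial state).
    return Project(goal=goal, state={"phase": 0})

def generate_session(project: Project, memory: MemoryStore) -> Session:
    # Stage 2: Multi-Agent Dialogue Generation, conditioned on project state
    # and accumulated memory (stubbed with a placeholder turn here).
    turn = f"phase {project.state['phase']}: discuss '{project.goal}'"
    return Session(turns=[turn])

def run_pipeline(goal: str, n_sessions: int) -> MemoryStore:
    """Simulate memory evolution over a sequence of project sessions."""
    project = construct_project(goal)
    memory = MemoryStore()
    for _ in range(n_sessions):
        session = generate_session(project, memory)
        memory.update(session)           # Stage 3: Memory and Schedule Management
        project.state["phase"] += 1      # the project goal-state evolves over time
    return memory
```

Under this toy model, evaluating a memory system amounts to asking whether queries posed in later sessions can be answered correctly from `MemoryStore` contents accumulated across earlier, goal-evolving sessions.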