🤖 AI Summary
Existing evaluations of external memory systems rely predominantly on static setups, failing to capture the dynamic interplay of streaming memory updates, insertions, and retrievals that characterizes real-world use, and thereby yielding misleading performance assessments. This work proposes Neuromem, the first fine-grained, lifecycle-decomposed framework tailored to streaming external memory. Neuromem disentangles memory mechanisms into five orthogonal dimensions (data structure, normalization, integration strategy, query construction, and context fusion) and implements a unified benchmarking platform that supports modular component substitution. Experiments on LOCOMO, LONGMEMEVAL, and MEMORYAGENTBENCH reveal that scaling memory size generally degrades performance, that time-sensitive queries pose the greatest challenge, that the choice of memory data structure primarily dictates the performance ceiling, and that aggressive compression and generative fusion techniques largely shift cost between insertion and retrieval without substantially improving accuracy.
📝 Abstract
Most evaluations of External Memory Modules assume a static setting: memory is built offline and queried at a fixed state. In practice, memory is streaming: new facts arrive continuously, insertions interleave with retrievals, and the memory state evolves while the model is serving queries. In this regime, accuracy and cost are governed by the full memory lifecycle, which encompasses the ingestion, maintenance, retrieval, and integration of information into generation. We present Neuromem, a scalable testbed that benchmarks External Memory Modules under an interleaved insertion-and-retrieval protocol and decomposes the memory lifecycle into five dimensions: memory data structure, normalization strategy, consolidation policy, query formulation strategy, and context integration mechanism. Using three representative datasets, LOCOMO, LONGMEMEVAL, and MEMORYAGENTBENCH, Neuromem evaluates interchangeable variants within a shared serving stack, reporting token-level F1 and insertion/retrieval latency. Overall, we observe that performance typically degrades as memory grows across rounds, and that time-related queries remain the most challenging category. The memory data structure largely determines the attainable quality frontier, while aggressive compression and generative integration mechanisms mostly shift cost between insertion and retrieval with limited accuracy gain.
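The interleaved insertion-and-retrieval protocol and the token-level F1 metric described above can be sketched as follows. This is a minimal illustration, not Neuromem's actual implementation: the `KeywordMemory` class and the `run_interleaved` driver are hypothetical stand-ins for the memory modules and serving stack under test, and the F1 definition assumed here is the standard SQuAD-style token-overlap variant.

```python
from collections import Counter


def token_f1(prediction: str, reference: str) -> float:
    """SQuAD-style token-level F1 (assumed definition, not from the paper)."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


class KeywordMemory:
    """Toy external memory: a flat list of facts with keyword-overlap retrieval.

    Hypothetical stand-in for the interchangeable memory modules the
    benchmark evaluates; real modules vary data structure, normalization,
    consolidation, query formulation, and context integration.
    """

    def __init__(self):
        self.facts = []

    def insert(self, fact: str) -> None:
        self.facts.append(fact)

    def retrieve(self, query: str, k: int = 1):
        q = set(query.lower().split())
        ranked = sorted(self.facts,
                        key=lambda f: len(set(f.lower().split()) & q),
                        reverse=True)
        return ranked[:k]


def run_interleaved(memory, events):
    """Drive the streaming protocol: insertions interleave with queries,
    so each query sees the memory state at that point in the stream.

    events: ('insert', fact) or ('query', question, reference_answer).
    Returns mean token-level F1 over all queries.
    """
    scores = []
    for event in events:
        if event[0] == 'insert':
            memory.insert(event[1])
        else:
            _, question, reference = event
            hits = memory.retrieve(question)
            answer = hits[0] if hits else ""
            scores.append(token_f1(answer, reference))
    return sum(scores) / len(scores) if scores else 0.0
```

Under this protocol, queries issued early in the stream are scored against a small memory while later ones face the grown state, which is exactly the regime where the abstract reports quality degrading as memory accumulates across rounds.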