🤖 AI Summary
Existing text embedding benchmarks struggle to assess model performance on memory retrieval tasks that are fragmented, context-dependent, and span long temporal horizons. This work proposes LMEB, the first comprehensive benchmark designed specifically for long-horizon memory retrieval. LMEB systematically defines and constructs four memory categories (episodic, dialogue, semantic, and procedural), aggregating 22 datasets and 193 zero-shot tasks from a combination of AI-generated and human-annotated data. The authors evaluate 15 mainstream embedding models and find that conventional retrieval performance is largely orthogonal to long-horizon memory capability, that model scale does not reliably predict retrieval effectiveness, and that no single model currently excels across all memory types, thereby addressing a critical gap in the evaluation landscape.
📝 Abstract
Memory embeddings are crucial for memory-augmented systems such as OpenClaw, yet their evaluation is underexplored: current text embedding benchmarks focus narrowly on traditional passage retrieval and fail to assess models' ability to handle long-horizon memory retrieval tasks involving fragmented, context-dependent, and temporally distant information. To address this, we introduce the Long-horizon Memory Embedding Benchmark (LMEB), a comprehensive framework for evaluating embedding models on complex, long-horizon memory retrieval. LMEB spans 22 datasets and 193 zero-shot retrieval tasks across four memory types (episodic, dialogue, semantic, and procedural), built from both AI-generated and human-annotated data. These memory types differ in level of abstraction and temporal dependency, capturing distinct aspects of memory retrieval that reflect the diverse challenges of real-world use. We evaluate 15 widely used embedding models ranging in size from hundreds of millions to ten billion parameters. The results reveal that (1) LMEB provides a reasonable level of difficulty; (2) larger models do not always perform better; and (3) scores on LMEB and MTEB are largely orthogonal. This suggests that the field has yet to converge on a universal model capable of excelling across all memory retrieval tasks, and that performance on traditional passage retrieval may not generalize to long-horizon memory retrieval. In summary, by providing a standardized and reproducible evaluation framework, LMEB fills a crucial gap in memory embedding evaluation and drives further advances in text embedding for long-term, context-dependent memory retrieval. LMEB is available at https://github.com/KaLM-Embedding/LMEB.
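To make the evaluation setup concrete, the sketch below shows how a single zero-shot memory retrieval task of this kind is typically scored with an off-the-shelf embedding model: encode the query and the memory pool, rank memories by cosine similarity, and compute a retrieval metric such as NDCG@10. This is a minimal illustration under assumed data and model choices (the `all-MiniLM-L6-v2` checkpoint, the toy memories, and the relevance labels are hypothetical), not the benchmark's actual harness or API.

```python
# Minimal sketch of zero-shot memory retrieval scoring with an embedding model.
# Model name, example data, and labels are illustrative assumptions, not LMEB's API.
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical task: one query, a small pool of memory entries, binary relevance labels.
queries = ["What did the user plan for their trip last March?"]
memories = [
    "User: I booked flights to Lisbon for the second week of March.",
    "User: My dentist appointment is on Friday.",
    "Assistant: Noted, I'll remind you about the Lisbon itinerary.",
]
relevant = {0: {0, 2}}  # query index -> indices of relevant memories (assumed labels)

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for any evaluated model
q_emb = model.encode(queries, normalize_embeddings=True)
m_emb = model.encode(memories, normalize_embeddings=True)

# With normalized embeddings, the dot product equals cosine similarity.
scores = q_emb @ m_emb.T


def ndcg_at_k(ranked_ids, gold_ids, k=10):
    """Binary-relevance NDCG@k for a single query."""
    gains = [1.0 if i in gold_ids else 0.0 for i in ranked_ids[:k]]
    dcg = sum(g / np.log2(rank + 2) for rank, g in enumerate(gains))
    ideal = sum(1.0 / np.log2(rank + 2) for rank in range(min(len(gold_ids), k)))
    return dcg / ideal if ideal > 0 else 0.0


for qi, row in enumerate(scores):
    ranking = np.argsort(-row).tolist()  # memories sorted by descending similarity
    print(f"query {qi}: NDCG@10 = {ndcg_at_k(ranking, relevant[qi]):.3f}")
```

In a full benchmark run, this per-task score would be averaged over all queries in a dataset and then over the tasks within each memory type, so that episodic, dialogue, semantic, and procedural retrieval can be compared separately.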