LoCoMo-Plus: Beyond-Factual Cognitive Memory Evaluation Framework for LLM Agents

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical limitation in current evaluation methodologies for large language models, which predominantly focus on factual recall and fail to assess the models’ ability to retain and consistently apply implicit cognitive constraints—such as user states, goals, or values—over extended dialogues. To bridge this gap, we introduce LoCoMo-Plus, the first benchmark specifically designed to evaluate implicit cognitive memory through scenarios where contextual cues and trigger phrases are semantically disentangled, thereby testing long-term constraint retention and behavioral consistency. We develop a unified evaluation framework grounded in constraint consistency and conduct comprehensive experiments across diverse backbone models, retrieval-augmented architectures, and dedicated memory systems. Our findings reveal that state-of-the-art models exhibit significant deficiencies in these tasks, and LoCoMo-Plus effectively uncovers failure modes invisible to conventional metrics and explicit prompting strategies.

📝 Abstract
Long-term conversational memory is a core capability for LLM-based dialogue systems, yet existing benchmarks and evaluation protocols primarily focus on surface-level factual recall. In realistic interactions, appropriate responses often depend on implicit constraints, such as user state, goals, or values, that are not explicitly queried later. To evaluate this setting, we introduce LoCoMo-Plus, a benchmark for assessing cognitive memory under cue–trigger semantic disconnect, where models must retain and apply latent constraints across long conversational contexts. We further show that conventional string-matching metrics and explicit task-type prompting are misaligned with such scenarios, and propose a unified evaluation framework based on constraint consistency. Experiments across diverse backbone models, retrieval-based methods, and memory systems demonstrate that cognitive memory remains challenging and reveal failures not captured by existing benchmarks. Our code and evaluation framework are publicly available at: https://github.com/xjtuleeyf/Locomo-Plus.
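To make the constraint-consistency idea concrete, here is a minimal sketch of how such an evaluation loop could be structured. This is an illustrative assumption, not the paper's actual implementation: the `Episode` fields, the `respond` model interface, and the `satisfies` judge function are all hypothetical names chosen for exposition.

```python
# Hypothetical sketch of a constraint-consistency evaluation, in the spirit
# of LoCoMo-Plus: an early "cue" turn establishes an implicit constraint,
# filler turns separate it from a later "trigger" turn, and we score whether
# the model's response to the trigger still honors the constraint.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Episode:
    cue_turn: str            # early turn establishing an implicit constraint
    filler_turns: List[str]  # intervening, semantically disconnected dialogue
    trigger_turn: str        # later turn whose right answer needs the constraint
    constraint: str          # latent constraint the response must respect


def constraint_consistency(
    episodes: List[Episode],
    respond: Callable[[List[str]], str],     # model under test (hypothetical API)
    satisfies: Callable[[str, str], bool],   # judge: response honors constraint?
) -> float:
    """Fraction of episodes whose response honors the latent constraint."""
    if not episodes:
        return 0.0
    ok = 0
    for ep in episodes:
        context = [ep.cue_turn, *ep.filler_turns, ep.trigger_turn]
        response = respond(context)
        if satisfies(response, ep.constraint):
            ok += 1
    return ok / len(episodes)
```

In practice the `satisfies` judge would be an LLM-based or rubric-based check rather than string matching, which is exactly the mismatch the paper identifies in conventional metrics.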
Problem

Research questions and friction points this paper is trying to address.

cognitive memory
long-term conversational memory
implicit constraints
semantic disconnect
LLM agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

cognitive memory
constraint consistency
semantic disconnect
long-term dialogue
LLM evaluation