PersistBench: When Should Long-Term Memories Be Forgotten by LLMs?

📅 2026-02-01
📈 Citations: 0 · Influential: 0
🤖 AI Summary
While integrating long-term memory into large language models can enhance personalized dialogue, it also introduces novel security risks such as cross-domain information leakage and memory-induced sycophancy. This work is the first to formally define and quantify these memory-specific threats, and it presents PersistBench, a benchmark framework dedicated to evaluating their safety implications. Through systematic prompt construction, multi-scenario dialogue simulation, and quantitative failure-rate analysis, the study evaluates 18 mainstream large language models. Results reveal median failure rates of 53% in cross-domain leakage scenarios and 97% in sycophancy scenarios, exposing critical vulnerabilities in the current generation of models with respect to long-term memory safety.

📝 Abstract
Conversational assistants are increasingly integrating long-term memory with large language models (LLMs). This persistence of memories, e.g., that the user is vegetarian, can enhance personalization in future conversations. However, the same persistence can also introduce safety risks that have been largely overlooked. Hence, we introduce PersistBench to measure the extent of these safety risks. We identify two risks specific to long-term memory: cross-domain leakage, where LLMs inappropriately inject context from long-term memories; and memory-induced sycophancy, where stored long-term memories insidiously reinforce user biases. We evaluate 18 frontier and open-source LLMs on our benchmark. Our results reveal a surprisingly high failure rate across these LLMs: a median failure rate of 53% on cross-domain samples and 97% on sycophancy samples. To address this, our benchmark encourages the development of more robust and safer long-term memory usage in frontier conversational systems.
Problem

Research questions and friction points this paper is trying to address.

long-term memory · safety risks · cross-domain leakage · memory-induced sycophancy · conversational assistants
Innovation

Methods, ideas, or system contributions that make the work stand out.

PersistBench · long-term memory · cross-domain leakage · memory-induced sycophancy · LLM safety
Authors

Sidharth Pulipaka · AI4Bharat · AI Alignment
Oliver Chen · Supervised Program for Alignment Research (SPAR), Fall 2025
Manas Sharma · Supervised Program for Alignment Research (SPAR), Fall 2025
Taaha S Bajwa · Supervised Program for Alignment Research (SPAR), Fall 2025
Vyas Raina · University of Cambridge · Machine Learning, Deep Learning
Ivaxi Sheth · PhD student, CISPA-Helmholtz · Machine Learning, Causality, LLMs, Explainability, Safety