🤖 AI Summary
This work addresses the issue of “over-personalization” in memory-augmented dialogue systems, where excessive use of stored user information leads to unnatural, repetitive, or overly flattering responses. The study formally defines three types of over-personalization (Irrelevance, Repetition, and Sycophancy) and introduces OP-Bench, the first dedicated evaluation benchmark, comprising 1,700 annotated instances. To mitigate this problem, the authors propose Self-ReCheck, a lightweight, model-agnostic mechanism that uses self-reflective memory filtering to suppress redundant or inappropriate personalization cues. Experimental results show that prevailing large language models are prone to over-personalization, while integrating Self-ReCheck significantly alleviates these issues without compromising the system’s ability to deliver appropriately personalized responses.
📝 Abstract
Memory-augmented conversational agents enable personalized interactions using long-term user memory and have gained substantial traction. However, existing benchmarks primarily focus on whether agents can recall and apply user information, while overlooking whether such personalization is used appropriately. In fact, agents may overuse personal information, producing responses that feel forced, intrusive, or socially inappropriate to users. We refer to this issue as \emph{over-personalization}. In this work, we formalize over-personalization into three types: Irrelevance, Repetition, and Sycophancy, and introduce \textbf{OP-Bench}, a benchmark of 1,700 verified instances constructed from long-horizon dialogue histories. Using \textbf{OP-Bench}, we evaluate multiple large language models and memory-augmentation methods, and find that over-personalization is widespread when memory is introduced. Further analysis reveals that agents tend to retrieve and over-attend to user memories even when unnecessary. To address this issue, we propose \textbf{Self-ReCheck}, a lightweight, model-agnostic memory filtering mechanism that mitigates over-personalization while preserving personalization performance. Our work takes an initial step toward more controllable and appropriate personalization in memory-augmented dialogue systems.
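The self-reflective filtering idea described above could be sketched roughly as follows. This is a minimal illustration, not the paper's actual method: `reflect` stands in for an LLM self-reflection call, and here it is replaced by a toy keyword-overlap heuristic; all function names and the threshold are assumptions for demonstration only.

```python
# Hypothetical sketch of self-reflective memory filtering, in the spirit of
# Self-ReCheck. In the real system, `reflect` would be an LLM call that judges
# whether a retrieved memory is actually needed; here it is a toy heuristic.

def reflect(query: str, memory: str) -> bool:
    """Stand-in for a self-reflection judgment: keep a memory only if it
    clearly relates to the current query (toy word-overlap check)."""
    q_words = set(query.lower().split())
    m_words = set(memory.lower().split())
    return len(q_words & m_words) >= 2  # illustrative relevance threshold

def filter_memories(query: str, memories: list[str]) -> list[str]:
    """Drop retrieved memories the reflection step deems unnecessary,
    so the agent does not over-attend to irrelevant user information."""
    return [m for m in memories if reflect(query, m)]

memories = [
    "user enjoys hiking in the alps every summer",
    "user is allergic to peanuts",
    "user prefers vintage jazz records",
]
query = "any tips for hiking in the alps this summer?"
kept = filter_memories(query, memories)  # only the hiking memory survives
```

The design point is that filtering happens after retrieval but before response generation, so unnecessary personalization cues never reach the generation prompt, rather than relying on the model to ignore them.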