🤖 AI Summary
Current evaluation methods conflate user preferences with irrelevant dialogue content and fail to account for the dynamic evolution and cumulative nature of preferences across multi-turn interactions, thereby inadequately measuring true personalized memory capabilities. This work proposes the first temporal evaluation framework that integrates event relevance, temporal dynamics, and linguistic personalization. It establishes an event-driven, cross-session, multi-domain interaction benchmark that simulates real-world user noise and individual stylistic variation through textual variability and language alignment. The framework enables joint assessment of token efficiency and preference extraction accuracy. Experiments demonstrate that contextually relevant interactions enhance preference extraction accuracy while reducing token consumption; however, existing systems still struggle to maintain persona consistency over long temporal spans and under cross-domain interference.
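The summary's "joint assessment of token efficiency and preference extraction accuracy" can be pictured with a minimal sketch. The function name, report fields, and per-query normalization below are illustrative assumptions, not PERMA's actual metric definitions.

```python
# Hypothetical joint scoring of a memory system on (a) how accurately it
# extracts user preferences and (b) how many tokens it consumed doing so.
# Names and structure are illustrative, not taken from the PERMA codebase.
def evaluate(predictions, gold, tokens_used):
    """predictions / gold: parallel lists of extracted vs. true preferences;
    tokens_used: total tokens the memory system consumed over all queries."""
    correct = sum(p == g for p, g in zip(predictions, gold))
    accuracy = correct / len(gold)
    tokens_per_query = tokens_used / len(gold)
    return {"accuracy": accuracy, "tokens_per_query": tokens_per_query}

report = evaluate(
    predictions=["mild food", "window seat"],
    gold=["mild food", "aisle seat"],
    tokens_used=3000,
)
print(report)  # {'accuracy': 0.5, 'tokens_per_query': 1500.0}
```

Reporting the two numbers side by side captures the paper's finding that linking contextually relevant interactions can raise accuracy while lowering token consumption.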
📝 Abstract
Empowering large language models with long-term memory is crucial for building agents that adapt to users' evolving needs. However, prior evaluations typically interleave preference-related dialogues with irrelevant conversations, reducing the task to needle-in-a-haystack retrieval while ignoring relationships between events that drive the evolution of user preferences. Such settings overlook a fundamental characteristic of real-world personalization: preferences emerge gradually and accumulate across interactions within noisy contexts. To bridge this gap, we introduce PERMA, a benchmark designed to evaluate persona consistency over time beyond static preference recall. Additionally, we incorporate (1) text variability and (2) linguistic alignment to simulate erratic user inputs and individual idiolects in real-world data. PERMA consists of temporally ordered interaction events spanning multiple sessions and domains, with preference-related queries inserted over time. We design both multiple-choice and interactive tasks to probe the model's understanding of persona along the interaction timeline. Experiments demonstrate that by linking related interactions, advanced memory systems can extract more precise preferences and reduce token consumption, outperforming traditional semantic retrieval of raw dialogues. Nevertheless, they still struggle to maintain a coherent persona over long temporal spans and under cross-domain interference, highlighting the need for more robust personalized memory management in agents. Our code and data are open-sourced at https://github.com/PolarisLiu1/PERMA.
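The abstract's "temporally ordered interaction events spanning multiple sessions and domains" can be sketched as a small data structure. The field names, domains, and the last-write-wins update rule below are hypothetical, intended only to show how preferences evolve and accumulate amid noisy events rather than being statically recalled.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical event record for a cross-session, multi-domain interaction
# timeline; the schema is illustrative, not PERMA's actual data format.
@dataclass
class InteractionEvent:
    timestamp: int                      # position along the interaction timeline
    session_id: str                     # events span multiple sessions
    domain: str                         # e.g. "dining", "travel"
    dialogue: str                       # the user-agent exchange, possibly noisy
    preference: Optional[str] = None    # preference this event reveals, if any

def evolved_preferences(events: list) -> dict:
    """Replay events in temporal order; a later preference in a domain
    overrides an earlier one, modeling evolving rather than static personas."""
    prefs = {}
    for ev in sorted(events, key=lambda e: e.timestamp):
        if ev.preference is not None:
            prefs[ev.domain] = ev.preference  # latest preference wins
    return prefs

events = [
    InteractionEvent(1, "s1", "dining", "User orders a spicy curry.", "likes spicy food"),
    InteractionEvent(2, "s1", "travel", "Small talk about commute routes."),   # noise
    InteractionEvent(3, "s2", "dining", "User now avoids spice for health.", "prefers mild food"),
]
print(evolved_preferences(events))  # {'dining': 'prefers mild food'}
```

A system that retrieves only the first dining event would answer a later preference query incorrectly, which is exactly the failure mode the benchmark's timeline-indexed queries are designed to expose.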