Towards Realistic Personalization: Evaluating Long-Horizon Preference Following in Personalized User-LLM Interactions

📅 2026-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of systematic evaluation of large language models' ability to continuously understand and adhere to users' complex preferences over long-term, real-world interactions. To this end, we introduce RealPref, a personalization benchmark that, for the first time, integrates preference expressions ranging from explicit to implicit, extended interaction histories, and diverse question-answering formats. Built on 100 user personas, 1,300 fine-grained preferences, and simulated long-sequence dialogues, RealPref employs an automated LLM-as-a-judge evaluation methodology. Experimental results reveal significant performance degradation as context length increases and preferences become more implicit, along with limited generalization to unseen scenarios, exposing critical bottlenecks in current models' capacity for sustained personalized interaction.

📝 Abstract
Large Language Models (LLMs) are increasingly serving as personal assistants, where users share complex and diverse preferences over extended interactions. However, assessing how well LLMs can follow these preferences in realistic, long-term situations remains underexplored. This work proposes RealPref, a benchmark for evaluating realistic preference-following in personalized user-LLM interactions. RealPref features 100 user profiles, 1300 personalized preferences, four types of preference expression (ranging from explicit to implicit), and long-horizon interaction histories. It includes three types of test questions (multiple-choice, true-or-false, and open-ended), with detailed rubrics for LLM-as-a-judge evaluation. Results indicate that LLM performance significantly drops as context length grows and preference expression becomes more implicit, and that generalizing user preference understanding to unseen scenarios poses further challenges. RealPref and these findings provide a foundation for future research to develop user-aware LLM assistants that better adapt to individual needs. The code is available at https://github.com/GG14127/RealPref.
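The abstract describes scoring open-ended answers with an LLM-as-a-judge guided by detailed rubrics. As a minimal sketch of how rubric-based judging can work, assuming a per-preference rubric of weighted yes/no criteria (the names, rubric format, and scoring scheme here are illustrative assumptions, not RealPref's actual code, which lives in the linked repository):

```python
# Hypothetical sketch of rubric-based LLM-as-a-judge scoring.
# RubricItem, build_judge_prompt, and score_from_verdicts are assumed names,
# not part of the RealPref codebase.
from dataclasses import dataclass


@dataclass
class RubricItem:
    criterion: str   # e.g. "Response respects the user's dietary preference"
    weight: float    # contribution of this criterion to the overall score


def build_judge_prompt(preference: str, response: str,
                       rubric: list[RubricItem]) -> str:
    """Assemble a prompt asking the judge LLM to grade each criterion YES/NO."""
    lines = [
        f"User preference: {preference}",
        f"Assistant response: {response}",
        "For each criterion below, answer YES or NO:",
    ]
    lines += [f"{i + 1}. {item.criterion}" for i, item in enumerate(rubric)]
    return "\n".join(lines)


def score_from_verdicts(verdicts: list[bool],
                        rubric: list[RubricItem]) -> float:
    """Weighted fraction of rubric criteria the judge marked as satisfied."""
    total = sum(item.weight for item in rubric)
    earned = sum(item.weight for ok, item in zip(verdicts, rubric) if ok)
    return earned / total if total else 0.0


rubric = [
    RubricItem("Mentions the user's vegetarian preference", 0.6),
    RubricItem("Avoids recommending meat dishes", 0.4),
]
prompt = build_judge_prompt("I am vegetarian.",
                            "Here are three tofu recipes...", rubric)
print(score_from_verdicts([True, True], rubric))   # all criteria satisfied
print(score_from_verdicts([True, False], rubric))  # partial credit
```

In practice the verdicts would be parsed from the judge model's reply to `prompt`; separating prompt construction from score aggregation keeps the rubric auditable independently of the judge model.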
Problem

Research questions and friction points this paper is trying to address.

personalization
preference following
long-horizon interaction
user-LLM interaction
realistic evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

personalized LLM
long-horizon interaction
preference following
realistic benchmark
implicit preference