🤖 AI Summary
This paper addresses the challenge that large language models (LLMs) struggle to infer, retain, and adhere to users' explicit and implicit preferences over long multi-turn dialogues. The authors introduce PrefEval, a long-context, preference-centric benchmark for multi-turn dialogue comprising 3,000 preference-query pairs spanning 20 topics. A dual-task evaluation framework (generation and classification) systematically quantifies LLMs' preference adherence, revealing that in zero-shot settings state-of-the-art models' accuracy falls below 10% at only ten turns. Comprehensive evaluation across zero-shot prompting, retrieval-augmented generation (RAG), iterative feedback, and supervised fine-tuning shows that fine-tuning on PrefEval significantly improves performance. The dataset, code, and evaluation protocol are publicly released as foundational resources for personalized dialogue research.
📝 Abstract
Large Language Models (LLMs) are increasingly used as chatbots, yet their ability to personalize responses to user preferences remains limited. We introduce PrefEval, a benchmark for evaluating LLMs' ability to infer, memorize, and adhere to user preferences in long-context conversational settings. PrefEval comprises 3,000 manually curated user preference and query pairs spanning 20 topics. It captures user personalization and preference information in both explicit and implicit forms, and evaluates LLM performance with a generation task and a classification task. With PrefEval, we evaluate the preference-following capabilities of 10 open-source and proprietary LLMs in multi-session conversations with context lengths of up to 100k tokens, benchmarking various prompting, iterative feedback, and retrieval-augmented generation (RAG) methods. Our benchmarking reveals that state-of-the-art LLMs face significant challenges in proactively following users' preferences during conversations. In particular, in zero-shot settings, preference-following accuracy falls below 10% at merely 10 turns (~3k tokens) across most evaluated models. Even with advanced prompting and retrieval methods, preference following still deteriorates in long-context conversations. Furthermore, we show that fine-tuning on PrefEval significantly improves performance. We believe PrefEval serves as a valuable resource for measuring, understanding, and enhancing LLMs' preference-following abilities, paving the way for personalized conversational agents. Our code and dataset are available at https://prefeval.github.io/.