🤖 AI Summary
A systematic benchmark for evaluating the personalization capability of task-oriented dialogue AI systems has been lacking. Method: This paper introduces PersonaLens, the first benchmark designed specifically to assess personalization in task-oriented assistants, featuring diverse user personas and realistic interaction histories. It proposes a dual-agent automated evaluation framework powered by large language models (LLMs): one agent simulates users through behavior modeling, structured preference injection, multi-dimensional prompt engineering, and dialogue trajectory generation; the other serves as an LLM-as-a-Judge evaluator. Contribution/Results: The work formally defines and quantifies personalization capability for the first time, unifying the assessment of personalization degree, response quality, and task completion. Empirical evaluation of mainstream LLM-based assistants reveals substantial disparities in personalization performance; the benchmark is reproducible and yields actionable insights for model improvement and evaluation standardization.
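To make the dual-agent setup concrete, below is a minimal Python sketch of one evaluation episode: a persona-conditioned user agent converses with the assistant under test, and a judge agent scores the finished dialogue on personalization, response quality, and task success. This is an illustrative reconstruction, not the paper's published API; `chat(prompt) -> str` stands in for any LLM completion call, `assistant(dialogue) -> str` for the system under test, and the `[DONE]` termination marker is an assumed convention.

```python
# Hypothetical sketch of the dual-agent evaluation loop described above.
# All names and prompt formats are illustrative assumptions, not the
# paper's actual implementation.
import json
from dataclasses import dataclass

@dataclass
class Persona:
    profile: str            # demographics and stable traits
    preferences: list[str]  # structured preferences to inject
    history: list[str]      # summaries of past interactions

def user_agent_turn(chat, persona: Persona, dialogue: list[str], task: str) -> str:
    """Simulate the user: behavior modeling plus preference injection via the prompt."""
    prompt = (
        f"You are a user with profile: {persona.profile}\n"
        f"Preferences: {'; '.join(persona.preferences)}\n"
        f"Past interactions: {'; '.join(persona.history)}\n"
        f"Current task: {task}\n"
        "Continue the dialogue as this user, revealing preferences only "
        "implicitly, as a real user would. Say [DONE] when the task is complete.\n\n"
        + "\n".join(dialogue)
    )
    return chat(prompt)

def judge_agent(chat, persona: Persona, dialogue: list[str], task: str) -> dict:
    """LLM-as-a-Judge: rate personalization, response quality, and task success."""
    prompt = (
        "Rate the ASSISTANT turns in the dialogue below.\n"
        f"User profile: {persona.profile}\n"
        f"Preferences: {'; '.join(persona.preferences)}\n"
        f"Task: {task}\nDialogue:\n" + "\n".join(dialogue) +
        '\n\nReturn JSON with 1-5 integer scores: '
        '{"personalization": 0, "quality": 0, "task_success": 0}'
    )
    return json.loads(chat(prompt))

def run_episode(chat, assistant, persona: Persona, task: str, max_turns: int = 10) -> dict:
    """Run one user-agent/assistant dialogue, then score it with the judge."""
    dialogue: list[str] = []
    for _ in range(max_turns):
        user_msg = user_agent_turn(chat, persona, dialogue, task)
        dialogue.append(f"USER: {user_msg}")
        if "[DONE]" in user_msg:  # assumed termination signal from the user agent
            break
        dialogue.append(f"ASSISTANT: {assistant(dialogue)}")
    return judge_agent(chat, persona, dialogue, task)
```

Keeping the user agent and the judge as separate calls mirrors the paper's separation of dialogue simulation from evaluation, so the judge never sees the user agent's hidden prompt, only the transcript and the persona it is asked to score against.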
📝 Abstract
Large language models (LLMs) have advanced conversational AI assistants. However, systematically evaluating how well these assistants apply personalization (adapting to individual user preferences while completing tasks) remains challenging. Existing personalization benchmarks focus on chit-chat, non-conversational tasks, or narrow domains, failing to capture the complexities of personalized task-oriented assistance. To address this, we introduce PersonaLens, a comprehensive benchmark for evaluating personalization in task-oriented AI assistants. Our benchmark features diverse user profiles equipped with rich preferences and interaction histories, along with two specialized LLM-based agents: a user agent that engages in realistic task-oriented dialogues with AI assistants, and a judge agent that employs the LLM-as-a-Judge paradigm to assess personalization, response quality, and task success. Through extensive experiments with current LLM assistants across diverse tasks, we reveal significant variability in their personalization capabilities, providing crucial insights for advancing conversational AI systems.