PersonaLens: A Benchmark for Personalization Evaluation in Conversational AI Assistants

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
A systematic benchmark for evaluating how well task-oriented dialogue assistants personalize their behavior has been lacking. Method: The paper introduces PersonaLens, the first benchmark designed specifically to assess personalization in task-oriented assistants, built around diverse user personas with realistic interaction histories. It couples the benchmark with a dual-agent automated evaluation framework powered by large language models (LLMs): a user agent that simulates users through behavior modeling, structured preference injection, multi-dimensional prompt engineering, and dialogue trajectory generation, and a judge agent that acts as an LLM-as-a-Judge evaluator. Contribution/Results: The work gives personalization capability a formal, quantifiable definition, unifying the assessment of personalization degree, response quality, and task completion. Empirical evaluation across mainstream LLM-based assistants reveals substantial disparities in personalization performance, establishing a reproducible benchmark and delivering actionable insights for model improvement and evaluation standardization.
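To make the dual-agent setup concrete, the sketch below shows one way the persona-conditioned user agent could drive a dialogue with the assistant under test. It is a minimal illustration under the assumption that both agents are plain text-in/text-out callables; the function and variable names (run_dialogue, user_agent, etc.) are hypothetical and are not the paper's released code.

```python
from typing import Callable, List

LLM = Callable[[str], str]  # any text-in/text-out model call (assumed interface)

def run_dialogue(user_agent: LLM, assistant: LLM, persona: str, task: str,
                 max_turns: int = 6) -> List[str]:
    """Persona-conditioned user agent converses with the assistant under test."""
    transcript: List[str] = []
    user_msg = user_agent(
        f"Persona:\n{persona}\n\nTask: {task}\n\nStart the conversation as this user."
    )
    for _ in range(max_turns):
        transcript.append(f"USER: {user_msg}")
        reply = assistant("\n".join(transcript))          # assistant sees the dialogue so far
        transcript.append(f"ASSISTANT: {reply}")
        user_msg = user_agent(
            f"Persona:\n{persona}\n\nDialogue so far:\n" + "\n".join(transcript)
            + "\n\nReply as the user; say DONE when the task is complete."
        )
        if "DONE" in user_msg:                            # simple stop signal for the sketch
            break
    return transcript
```

The resulting transcript would then be handed to the judge agent for scoring along the three dimensions described above.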

📝 Abstract
Large language models (LLMs) have advanced conversational AI assistants. However, systematically evaluating how well these assistants apply personalization--adapting to individual user preferences while completing tasks--remains challenging. Existing personalization benchmarks focus on chit-chat, non-conversational tasks, or narrow domains, failing to capture the complexities of personalized task-oriented assistance. To address this, we introduce PersonaLens, a comprehensive benchmark for evaluating personalization in task-oriented AI assistants. Our benchmark features diverse user profiles equipped with rich preferences and interaction histories, along with two specialized LLM-based agents: a user agent that engages in realistic task-oriented dialogues with AI assistants, and a judge agent that employs the LLM-as-a-Judge paradigm to assess personalization, response quality, and task success. Through extensive experiments with current LLM assistants across diverse tasks, we reveal significant variability in their personalization capabilities, providing crucial insights for advancing conversational AI systems.
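As a concrete illustration of the "user profiles equipped with rich preferences and interaction histories" mentioned in the abstract, the following is one plausible shape for a persona record. The field names and example values are assumptions for illustration only, not PersonaLens's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PastInteraction:
    domain: str   # e.g. "restaurant_booking"
    summary: str  # what the user asked for in a previous session

@dataclass
class UserPersona:
    name: str
    demographics: Dict[str, str]        # e.g. {"age_group": "30s", "city": "Seattle"}
    preferences: Dict[str, List[str]]   # per-domain preference lists
    interaction_history: List[PastInteraction] = field(default_factory=list)

# Hypothetical example instance
persona = UserPersona(
    name="user_0042",
    demographics={"age_group": "30s", "city": "Seattle"},
    preferences={
        "dining": ["vegetarian", "quiet venues"],
        "travel": ["aisle seat", "budget-friendly airlines"],
    },
    interaction_history=[
        PastInteraction("restaurant_booking", "booked a vegetarian-friendly dinner for two"),
    ],
)
```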
Problem

Research questions and friction points this paper is trying to address.

Evaluating personalization in task-oriented AI assistants
Assessing adaptation to user preferences during tasks
Addressing gaps in existing personalization benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces PersonaLens benchmark for personalization evaluation
Uses user and judge agents for realistic assessment
Employs LLM-as-a-Judge paradigm for comprehensive analysis (see the prompt sketch below)
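In practice, an LLM-as-a-Judge setup of this kind amounts to a rubric prompt plus structured parsing of the judge's reply. A minimal sketch under that assumption follows; the rubric wording, JSON keys, and helper names are illustrative, not the benchmark's released prompt.

```python
import json
from typing import Callable, Dict

# Illustrative rubric covering the three dimensions named in the paper:
# personalization, response quality, and task success.
JUDGE_PROMPT = """You are an impartial evaluator of a task-oriented AI assistant.

User persona:
{persona}

Dialogue:
{dialogue}

Rate the assistant from 1 to 5 on each dimension and reply in JSON with keys
"personalization", "quality", "task_success", and "rationale":
1. Personalization: does the assistant adapt to the persona's preferences?
2. Response quality: are replies fluent, relevant, and helpful?
3. Task success: was the user's task completed correctly?"""

def judge_scores(judge_llm: Callable[[str], str], persona: str, dialogue: str) -> Dict:
    """Ask the judge LLM for scores and parse its JSON reply."""
    reply = judge_llm(JUDGE_PROMPT.format(persona=persona, dialogue=dialogue))
    return json.loads(reply)
```

Averaging such per-dialogue scores across tasks and assistants is what would surface the performance disparities the summary reports.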