Generating Usage-related Questions for Preference Elicitation in Conversational Recommender Systems

📅 2021-11-26
🏛️ Trans. Recomm. Syst.
📈 Citations: 26
Influential: 1
🤖 AI Summary
To address the challenge of preference elicitation from new users with limited domain knowledge about item attributes, this paper proposes a usage-scenario-oriented implicit preference elicitation method. Methodologically, it introduces (1) a novel “usage-guided” question generation paradigm that models users’ actual usage intentions to produce highly relevant questions; (2) a multi-stage crowdsourcing annotation protocol to construct high-quality training data; and (3) a dual-track model architecture integrating template matching and neural generation (T5/BART). Under data-scarce conditions, the proposed model significantly outperforms baseline methods. Both automatic evaluation (BLEU/METEOR) and human evaluation (pointwise and pairwise) show strong agreement, confirming that the generated questions exhibit superior comprehensibility, relevance, and practical utility—effectively lowering users’ cognitive barriers to preference expression.
📝 Abstract
A key distinguishing feature of conversational recommender systems over traditional recommender systems is their ability to elicit user preferences using natural language. Currently, the predominant approach to preference elicitation is to ask questions directly about items or item attributes. Users searching for recommendations may not have deep knowledge of the available options in a given domain. As such, they might not be aware of key attributes or desirable values for them. However, in many settings, talking about the planned use of items does not present any difficulties, even for those that are new to a domain. In this paper, we propose a novel approach to preference elicitation by asking implicit questions based on item usage. As one of the main contributions of this work, we develop a multi-stage data annotation protocol using crowdsourcing, to create a high-quality labeled training dataset. Another main contribution is the development of four models for the question generation task: two template-based baseline models and two neural text-to-text models. The template-based models use heuristically extracted common patterns found in the training data, while the neural models use the training data to learn to generate questions automatically. Using common metrics from machine translation for automatic evaluation, we show that our approaches are effective in generating elicitation questions, even with limited training data. We further employ human evaluation for comparing the generated questions using both pointwise and pairwise evaluation designs. We find that the human evaluation results are consistent with the automatic ones, allowing us to draw conclusions about the quality of the generated questions with certainty. Finally, we provide a detailed analysis of cases where the models show their limitations.
Problem

Research questions and friction points this paper is trying to address.

How to elicit preferences from new users who lack domain knowledge of item attributes and desirable attribute values
How to automatically generate usage-based elicitation questions when labeled training data is scarce
How to evaluate the quality of generated questions reliably, via both automatic metrics and human assessment
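The automatic evaluation uses machine-translation metrics such as BLEU and METEOR. As an illustration of what BLEU measures, here is a minimal sketch of clipped n-gram precision with a brevity penalty (a simplification of BLEU, not the paper's exact evaluation setup):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def simple_bleu(candidate, reference, max_n=2):
    """Simplified BLEU: geometric mean of clipped n-gram precisions
    times a brevity penalty. Illustrative only, not the full metric
    (no smoothing, single reference)."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    bp = min(1.0, math.exp(1 - len(ref) / len(cand)))  # brevity penalty
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 1.0 and a candidate with no shared n-grams scores 0.0; real evaluations use smoothed corpus-level BLEU (e.g. sacrebleu or NLTK) rather than this toy version.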
Innovation

Methods, ideas, or system contributions that make the work stand out.

Elicits preferences implicitly by asking about the planned usage of items, rather than about items or attributes directly
Constructs a high-quality labeled training dataset via a multi-stage crowdsourcing annotation protocol
Develops two template-based baseline models and two neural text-to-text models (T5, BART) for question generation
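The template-based baselines fill common patterns extracted heuristically from the training data with slots. A minimal sketch of the idea (the template strings and slot names here are hypothetical, not the paper's mined patterns):

```python
import random

# Hypothetical usage-question templates in the spirit of the paper's
# template-based baselines; the actual patterns are mined from training data.
TEMPLATES = [
    "What do you plan to use the {item} for?",
    "Will you mostly use the {item} for {usage}?",
    "How often would you use a {item} for {usage}?",
]

def generate_question(item, usage=None, rng=random):
    """Pick a template compatible with the available slots and fill it."""
    candidates = [t for t in TEMPLATES
                  if usage is not None or "{usage}" not in t]
    template = rng.choice(candidates)
    return template.format(item=item, usage=usage)
```

The neural models (T5/BART) replace this rigid slot filling with learned text-to-text generation, which is why they can generalize beyond the mined patterns even with limited training data.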