🤖 AI Summary
This study investigates user preferences between two item-feature-based explanation styles, unidirectional (purely positive) and bidirectional (positive and negative), in movie recommendation systems, and examines their impact on perceived trust, transparency, effectiveness, and satisfaction.
Method: A controlled online user experiment (N=129) employed Likert-scale measurements to quantify perceptions, with recommendation items and feature dimensions held constant to isolate the effect of explanation style.
Contribution/Results: In this low-risk entertainment context, users significantly preferred concise unidirectional positive explanations: all four perceptual measures (trust, transparency, effectiveness, and satisfaction) were rated significantly higher than for bidirectional explanations, challenging the prevailing assumption that balanced, two-sided explanations are universally effective. The findings underscore the need for domain-specific explanation design and provide the first empirical evidence and practical guidelines for explainable interface development in entertainment-oriented recommender systems.
📝 Abstract
Recommender systems play a vital role in helping users discover content in streaming services, but their effectiveness depends on users understanding why items are recommended. In this study, explanations were based solely on item features rather than personalized user data, simulating non-personalized recommendation scenarios. We compared user perceptions of one-sided (purely positive) and two-sided (positive and negative) feature-based explanations for popular movie recommendations. In an online study with 129 participants, we examined how explanation style affected perceived trust, transparency, effectiveness, and satisfaction. One-sided explanations consistently received higher ratings across all four dimensions. Our findings suggest that in low-stakes entertainment domains such as popular movie recommendations, simpler positive explanations may be more effective. However, the results should be interpreted with caution due to potential confounding factors, such as item familiarity and the placement of negative information within explanations. This work provides practical insights for explanation design in recommender interfaces and highlights the importance of context in shaping user preferences.