🤖 AI Summary
Traditional recommender systems rely heavily on explicit feedback and fail to capture the duality of user decision-making: intrinsic long-term value ("enrichment") versus immediate appeal ("temptation"). As a result, they tend to optimize short-term engagement at the expense of long-term user satisfaction. To address this, we are the first to introduce the behavioral-economics "dual-self" theory into recommendation, proposing a bi-objective utility decomposition that jointly models enrichment and temptation. We further design a low-assumption learning framework that leverages both explicit feedback and implicit choice data to estimate the two components, optimizing explicitly for enrichment. Extensive experiments on real-world (e.g., MovieLens) and synthetic datasets demonstrate that our method significantly outperforms single-utility baselines, achieving a 23.6% improvement in enrichment. These results validate the effectiveness and robustness of long-term value-oriented recommendation.
📝 Abstract
Traditional recommender systems based on utility maximization and revealed preferences often fail to capture users' dual-self nature, in which consumption choices are driven by both long-term benefits (enrichment) and the desire for instant gratification (temptation). Consequently, these systems may generate recommendations that fail to provide long-lasting satisfaction to users. To address this issue, we propose a novel user model that accounts for this dual-self behavior and develop an optimal recommendation strategy to maximize enrichment from consumption. We highlight the limitations of historical consumption data in implementing this strategy and present an estimation framework that makes minimal assumptions and leverages explicit user feedback and implicit choice data to overcome these constraints. We evaluate our approach through both synthetic simulations and simulations based on real-world data from the MovieLens dataset. Results demonstrate that our proposed recommender can deliver superior enrichment compared to several competitive baseline algorithms that assume a single utility type and rely solely on revealed preferences. Our work emphasizes the critical importance of optimizing for enrichment in recommender systems, particularly in temptation-laden consumption contexts. Our findings have significant implications for content platforms, user experience design, and the development of responsible AI systems, paving the way for more nuanced and user-centric recommendation approaches.
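The core intuition can be illustrated with a minimal toy simulation. The sketch below assumes an additive dual-self choice rule (users pick the item maximizing enrichment plus a temptation-weighted term) and compares a revealed-preference baseline against ranking by enrichment alone; all variable names, the choice rule, and the `beta` parameter are illustrative assumptions, not the paper's exact model or estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item attributes (assumption for illustration):
# each item carries a long-term enrichment value and an instant-appeal
# temptation value, drawn independently.
n_items = 100
enrichment = rng.uniform(0.0, 1.0, n_items)
temptation = rng.uniform(0.0, 1.0, n_items)

def user_choice(slate, beta=1.0):
    """Dual-self choice rule (assumed): the user selects the slate item
    maximizing enrichment + beta * temptation, where beta weights the
    impulsive 'short-run self'."""
    scores = enrichment[slate] + beta * temptation[slate]
    return slate[int(np.argmax(scores))]

def recommend_revealed_preference(k, beta=1.0):
    """Single-utility baseline: rank by the combined score that drives
    observed choices, i.e. what clicks alone would reveal."""
    return np.argsort(-(enrichment + beta * temptation))[:k]

def recommend_enrichment(k):
    """Enrichment-oriented strategy: rank by (here, oracle) enrichment;
    in the paper this quantity would instead be estimated from explicit
    feedback and implicit choice data."""
    return np.argsort(-enrichment)[:k]

# Compare the average enrichment delivered by each top-k slate.
k = 10
baseline_enrichment = enrichment[recommend_revealed_preference(k)].mean()
ours_enrichment = enrichment[recommend_enrichment(k)].mean()
```

By construction, the enrichment-ranked slate can never deliver less average enrichment than the revealed-preference slate; the interesting (and harder) part, which this sketch omits, is estimating enrichment when only mixed feedback signals are observed.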