🤖 AI Summary
This work proposes the Long- and Short-term Aspect Interest Transformer (LSA), a novel approach to aspect-level recommendation that addresses the challenge of capturing the dynamic evolution of user interests, which often leads to inaccurate aspect weight assignments in interactions. LSA is the first to integrate long- and short-term interest modeling into aspect-based recommendation by leveraging a Transformer architecture to separately capture users’ long-term global behavioral patterns and short-term recent interaction preferences. Furthermore, it incorporates user–item–aspect relationships from a graph structure to dynamically assess aspect importance. Evaluated on four real-world datasets, LSA achieves an average 2.55% improvement in MSE over the strongest baseline, demonstrating significantly enhanced accuracy in assigning aspect weights at the interaction level.
📝 Abstract
Aspect-based recommendation methods extract aspect terms, such as price, from reviews to model fine-grained user preferences on items, making them a critical approach in personalized recommender systems. Existing methods use graphs to represent the relationships among users, items, and aspect terms, modeling user preferences with graph neural networks. However, they overlook the dynamic nature of user interests: users may temporarily focus on aspects they previously paid little attention to, making it difficult to assign accurate weights to aspect terms for each user–item interaction. In this paper, we propose the Long- and Short-term Aspect Interest Transformer (LSA) for aspect-based recommendation, which effectively captures the dynamic nature of user preferences by integrating both long-term and short-term aspect interests. Specifically, the short-term interests model the temporal changes in the importance of recently interacted aspect terms, while the long-term interests capture global behavioral patterns, including aspects that users have not interacted with recently. Finally, LSA combines long- and short-term interests to evaluate the importance of aspects within the union of user and item aspect neighbors, thereby accurately assigning aspect weights for each user–item interaction. Experiments conducted on four real-world datasets demonstrate that LSA improves MSE by 2.55% on average over the best baseline.
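To make the fusion of long- and short-term aspect interests concrete, the following is a minimal, illustrative sketch of the idea described above: a short-term interest vector is formed by attending over recently interacted aspect embeddings, a long-term vector by attending over the user's full aspect history, and the two are combined to score the candidate aspects (the union of user and item aspect neighbors). All function names, shapes, and the convex mixing weight `alpha` are assumptions for illustration; the paper's actual model uses a Transformer architecture and graph-derived relationships, not this simplified attention.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def aspect_weights(user_vec, recent_aspects, history_aspects,
                   candidate_aspects, alpha=0.5):
    """Illustrative sketch (not the paper's exact model).

    user_vec:          (d,)   user embedding
    recent_aspects:    (r, d) embeddings of recently interacted aspects
    history_aspects:   (h, d) embeddings of the user's full aspect history
    candidate_aspects: (c, d) union of user and item aspect neighbors
    alpha:             hypothetical mixing weight between interests
    Returns a (c,) weight distribution over the candidate aspects.
    """
    # Short-term interest: attention over recently interacted aspects.
    short_term = softmax(recent_aspects @ user_vec) @ recent_aspects
    # Long-term interest: attention over the full (global) aspect history.
    long_term = softmax(history_aspects @ user_vec) @ history_aspects
    # Fuse the two interests (a convex mix stands in for the Transformer).
    interest = alpha * long_term + (1.0 - alpha) * short_term
    # Score each candidate aspect against the fused interest.
    return softmax(candidate_aspects @ interest)
```

In this toy form, aspects a user ignored recently but attended to historically still receive weight through the long-term term, which is the behavior the abstract attributes to LSA.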