Modeling Domain and Feedback Transitions for Cross-Domain Sequential Recommendation

📅 2024-08-15
🏛️ arXiv.org
📈 Citations: 3 (0 influential)
🤖 AI Summary
Existing cross-domain sequential recommendation methods model only domain-level transitions, neglecting the preference-evolution signals carried by feedback-type transitions (e.g., shifts between positive and negative interactions), which leaves user interests under-modeled. This paper proposes the first framework to jointly capture domain-transition and feedback-transition dynamics. It introduces a transition-aware graph encoder to model cross-domain behavioral structure, a masked cross-transition multi-head self-attention mechanism to integrate temporal dependencies along both dimensions, and contrastive losses to align domain- and feedback-transition representations, thereby unifying graph representation learning, sequential modeling, and contrastive learning. Extensive experiments on two public benchmarks show significant improvements over state-of-the-art methods, validating that jointly modeling both kinds of transitions improves recommendation accuracy and generalization.

📝 Abstract
Nowadays, many recommender systems encompass various domains to cater to users' diverse needs, leading to user behaviors transitioning across different domains. In fact, user behaviors across different domains reveal changes in preference toward recommended items. For instance, a shift from negative feedback to positive feedback indicates improved user satisfaction. However, existing cross-domain sequential recommendation methods typically model user interests by focusing solely on information about domain transitions, often overlooking the valuable insights provided by users' feedback transitions. In this paper, we propose $\text{Transition}^2$, a novel method to model transitions across both domains and types of user feedback. Specifically, $\text{Transition}^2$ introduces a transition-aware graph encoder based on user history, assigning different weights to edges according to the feedback type. This enables the graph encoder to extract historical embeddings that capture the transition information between different domains and feedback types. Subsequently, we encode the user history using a cross-transition multi-head self-attention, incorporating various masks to distinguish different types of transitions. To further enhance representation learning, we employ contrastive losses to align transitions across domains and feedback types. Finally, we integrate these modules to make predictions across different domains. Experimental results on two public datasets demonstrate the effectiveness of $\text{Transition}^2$.
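The abstract's cross-transition self-attention, which uses different masks to distinguish transition types, can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the particular mask choices (same-domain, cross-domain, same-feedback), the toy history, and the embedding sizes are all assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def masked_attention(H, mask):
    """Scaled dot-product self-attention restricted by a boolean mask."""
    d = H.shape[-1]
    scores = H @ H.T / np.sqrt(d)          # (L, L) attention logits
    scores = np.where(mask, scores, -1e9)  # block disallowed transitions
    return softmax(scores, axis=-1) @ H    # (L, d) attended history

# Toy user history: 5 interactions with domain ids and feedback types.
rng = np.random.default_rng(0)
L, d = 5, 8
H = rng.normal(size=(L, d))                # interaction embeddings
domain = np.array([0, 1, 0, 1, 1])         # which domain each interaction is in
feedback = np.array([1, 0, 1, 1, 0])       # 1 = positive, 0 = negative

# Separate masks pick out same-domain, cross-domain, and same-feedback
# transitions; each mask yields one attention "view" of the history.
same_domain = domain[:, None] == domain[None, :]
cross_domain = ~same_domain
same_feedback = feedback[:, None] == feedback[None, :]

views = [masked_attention(H, m) for m in (same_domain, cross_domain, same_feedback)]
out = np.concatenate(views, axis=-1)       # fuse the per-mask views
print(out.shape)
```

In the paper's actual model each view would be one head (or group of heads) of a multi-head attention layer with learned projections; here the projections are omitted to keep the mask mechanics visible.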
Problem

Research questions and friction points this paper is trying to address.

Modeling user transitions across domains and feedback types
Capturing preference changes via domain and feedback transitions
Enhancing cross-domain recommendations with transition-aware modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Models domain and feedback transitions jointly
Uses transition-aware graph encoder with weighted edges
Employs cross-transition multi-head self-attention
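The contrastive alignment of domain- and feedback-transition representations mentioned above can be sketched with a standard symmetric InfoNCE loss. This is a minimal NumPy sketch, assuming InfoNCE as the contrastive objective; the two "views", the batch size, and the temperature are illustrative assumptions, not details from the paper.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Symmetric InfoNCE loss aligning two views of the same user sequences."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                       # (B, B) similarity matrix
    idx = np.arange(len(z1))                       # positives on the diagonal
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return (-log_p[idx, idx].mean() - log_p_t[idx, idx].mean()) / 2

rng = np.random.default_rng(1)
B, d = 4, 16
domain_view = rng.normal(size=(B, d))              # e.g. domain-transition encoding
feedback_view = domain_view + 0.05 * rng.normal(size=(B, d))  # correlated view
print(round(info_nce(domain_view, feedback_view), 4))
```

Minimizing this loss pulls the two transition views of the same user together while pushing apart views of different users, which is the usual mechanism behind such alignment objectives.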
Changshuo Zhang · Renmin University of China · Recommender Systems, Reinforcement Learning, Large Language Models
Teng Shi · Renmin University of China · Recommender Systems, Information Retrieval
Xiao Zhang · Gaoling School of AI, Renmin University of China
Qi Liu · WeChat, Tencent
Ruobing Xie · Tencent · Large Language Models, Recommender Systems, Natural Language Processing
Jun Xu · Gaoling School of AI, Renmin University of China
Jirong Wen · Gaoling School of AI, Renmin University of China