🤖 AI Summary
Existing cross-domain sequential recommendation methods model only domain-level transitions, neglecting the preference-evolution signals embedded in feedback-type transitions (e.g., shifts between positive and negative interactions). This paper proposes the first framework to jointly capture domain-transition and feedback-transition dynamics. It introduces a transition-aware graph encoder to model cross-domain behavioral structure, a masked cross-transition multi-head self-attention mechanism to integrate temporal dependencies along both dimensions, and contrastive losses to align domain- and feedback-transition representations, unifying graph representation learning, sequential modeling, and contrastive learning. Extensive experiments on two public benchmarks demonstrate significant improvements over state-of-the-art methods, indicating that jointly modeling both kinds of transitions improves recommendation accuracy and generalization.
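To make the masked cross-transition attention idea concrete, here is a minimal single-head sketch: attention over a user's interaction history is restricted by boolean masks derived from domain and feedback labels, so one pass captures within-domain transitions and another captures cross-domain ones. This is an illustrative NumPy toy under assumed shapes and mask definitions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def masked_attention(Q, K, V, mask):
    """Single-head scaled dot-product attention; mask[i, j] = True
    means position i is allowed to attend to position j."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores = np.where(mask, scores, -1e9)  # block disallowed transitions
    return softmax(scores, axis=-1) @ V

# Toy interaction history: domain and feedback label per event (assumed encoding)
domains  = np.array([0, 1, 0, 1])   # e.g., 0 = books, 1 = movies
feedback = np.array([1, 0, 1, 1])   # 1 = positive, 0 = negative

L, d = 4, 8
rng = np.random.default_rng(0)
H = rng.standard_normal((L, d))     # toy item embeddings for the history

# Masks distinguishing transition types, restricted to the causal past
same_domain  = domains[:, None] == domains[None, :]
allow_self   = np.eye(L, dtype=bool)            # avoid fully-masked rows
causal       = np.tril(np.ones((L, L), dtype=bool))
intra_mask   = (same_domain | allow_self) & causal   # within-domain transitions
cross_mask   = (~same_domain | allow_self) & causal  # cross-domain transitions

out_intra = masked_attention(H, H, H, intra_mask)
out_cross = masked_attention(H, H, H, cross_mask)
```

A feedback-transition mask can be built the same way from the `feedback` array; the full method would combine several such masked heads rather than two separate calls.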
📝 Abstract
Nowadays, many recommender systems encompass various domains to cater to users' diverse needs, leading to user behaviors transitioning across different domains. In fact, user behaviors across different domains reveal changes in preference toward recommended items. For instance, a shift from negative feedback to positive feedback indicates improved user satisfaction. However, existing cross-domain sequential recommendation methods typically model user interests by focusing solely on information about domain transitions, often overlooking the valuable insights provided by users' feedback transitions. In this paper, we propose $\text{Transition}^2$, a novel method to model transitions across both domains and types of user feedback. Specifically, $\text{Transition}^2$ introduces a transition-aware graph encoder based on user history, assigning different weights to edges according to the feedback type. This enables the graph encoder to extract historical embeddings that capture the transition information between different domains and feedback types. Subsequently, we encode the user history using a cross-transition multi-head self-attention, incorporating various masks to distinguish different types of transitions. To further enhance representation learning, we employ contrastive losses to align transitions across domains and feedback types. Finally, we integrate these modules to make predictions across different domains. Experimental results on two public datasets demonstrate the effectiveness of $\text{Transition}^2$.
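The contrastive alignment step can be illustrated with a standard InfoNCE-style loss that pulls together the domain-transition and feedback-transition views of the same user history while pushing apart views of different users. Temperature, batch size, and the NumPy formulation here are assumptions for illustration, not the authors' exact objective.

```python
import numpy as np

def info_nce(z1, z2, tau=0.2):
    """InfoNCE loss for paired views: row i of z1 and row i of z2 are
    positives (two views of the same user history); all other rows are
    in-batch negatives. tau is an assumed temperature."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                     # (N, N) cosine similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # -log p(positive | row)

rng = np.random.default_rng(1)
z_domain   = rng.standard_normal((8, 16))                    # domain-transition view
z_feedback = z_domain + 0.1 * rng.standard_normal((8, 16))   # nearly aligned view

loss_aligned = info_nce(z_domain, z_feedback)
loss_random  = info_nce(z_domain, rng.standard_normal((8, 16)))
```

As expected, the loss is much lower when the two views of each history are already aligned than when they are unrelated, which is what drives the two representations toward agreement during training.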