🤖 AI Summary
This paper addresses the multi-source-to-single-target non-overlapping cross-domain sequential recommendation (MNCSR) task, tackling three key challenges: strong ID dependency, semantic loss from explicit alignment, and difficulty in fusing heterogeneous multi-source semantics. The authors propose a text-enhanced co-attention prompt learning framework that decouples recommendation from user/item IDs and from explicit cross-domain alignment. Specifically, they design dual prompts, domain-shared and domain-specific, grounded in text-based semantic embeddings to capture fine-grained item semantics. The method employs a two-stage training strategy: joint pre-training across all source domains followed by target-domain prompt fine-tuning. This enables effective, ID-free, fine-grained semantic transfer without requiring overlapping user or item identities or explicit alignment. Extensive experiments on three benchmark datasets demonstrate significant improvements over state-of-the-art methods. The implementation is publicly available.
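The two-stage strategy described above can be illustrated with a deliberately simplified sketch (not the paper's actual model): a shared linear scorer modulated by an additive per-domain prompt vector, pre-trained over toy source domains and then frozen while only the target-domain prompt is tuned. All names and the toy data generator here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_domain(n, d, shift):
    # toy "domain": labels depend on a shared direction plus a domain shift
    X = rng.normal(size=(n, d)) + shift
    y = (X.sum(axis=1) > shift * d).astype(float)
    return X, y

def train(X, y, w, p, lr=0.1, steps=200, tune_prompt_only=False):
    """Logistic-regression training; the prompt p additively modulates w."""
    for _ in range(steps):
        z = X @ (w + p)
        pred = 1.0 / (1.0 + np.exp(-z))
        g = X.T @ (pred - y) / len(y)   # logistic-loss gradient
        if not tune_prompt_only:
            w -= lr * g                 # stage 1: shared weights trainable
        p -= lr * g                     # prompts are always trainable
    return w, p

d = 5
w = np.zeros(d)                         # domain-shared parameters
# stage 1: pre-training over all (toy) source domains
for shift in (0.0, 0.5):
    Xs, ys = make_domain(400, d, shift)
    w, _ = train(Xs, ys, w, np.zeros(d))
# stage 2: freeze shared weights, fine-tune only the target-domain prompt
Xt, yt = make_domain(400, d, 1.0)
w_frozen = w.copy()
_, p_target = train(Xt, yt, w, np.zeros(d), tune_prompt_only=True)
```

The point of the sketch is the parameter split: cross-domain knowledge lives in `w`, learned once from the sources, while adaptation to the non-overlapping target happens entirely in the lightweight prompt `p_target`.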
📝 Abstract
Non-overlapping Cross-domain Sequential Recommendation (NCSR) focuses on transferring domain knowledge without overlapping entities. Compared with traditional Cross-domain Sequential Recommendation (CSR), NCSR poses several challenges: 1) existing methods often rely on explicit item IDs, overlooking the semantic information shared among entities; 2) existing CSR mainly relies on domain alignment for knowledge transfer, risking semantic loss during alignment; 3) most previous studies ignore the many-to-one setting, which is challenging because multiple source domains must be exploited jointly. Given the above challenges, we introduce the prompt learning technique for Many-to-one Non-overlapping Cross-domain Sequential Recommendation (MNCSR) and propose a Text-enhanced Co-attention Prompt Learning Paradigm (TCPLP). Specifically, we capture semantic meaning by representing items through text rather than IDs, leveraging the universality of natural language to facilitate cross-domain knowledge transfer. Unlike prior works that require explicit domain alignment, we directly learn transferable domain information: two types of prompts, i.e., domain-shared and domain-specific prompts, are devised, with a co-attention-based network for prompt encoding. We then develop a two-stage learning strategy, i.e., a pre-train & prompt-tuning paradigm, for domain knowledge pre-learning and transfer, respectively. Extensive experiments on three datasets demonstrate the superiority of our TCPLP. Our source code is publicly available.
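The co-attention-based prompt encoding mentioned in the abstract can be sketched minimally as follows. This is an assumption about the general co-attention mechanism, not the paper's exact network: the domain-shared and domain-specific prompt matrices score each other through a scaled affinity matrix, and each set is re-expressed as an attention-weighted mixture of the other. All variable names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(shared, specific):
    """Co-attend two prompt sets.

    shared:   (m, d) domain-shared prompt embeddings
    specific: (n, d) domain-specific prompt embeddings
    Returns contextualised versions of both sets.
    """
    d = shared.shape[1]
    affinity = shared @ specific.T / np.sqrt(d)          # (m, n)
    # each shared prompt attends over the specific prompts, and vice versa
    shared_ctx = softmax(affinity, axis=1) @ specific    # (m, d)
    specific_ctx = softmax(affinity.T, axis=1) @ shared  # (n, d)
    return shared_ctx, specific_ctx

rng = np.random.default_rng(0)
s_ctx, p_ctx = co_attention(rng.normal(size=(4, 8)), rng.normal(size=(6, 8)))
```

The bidirectional weighting is what lets transferable (shared) information and domain-particular information condition each other without any explicit cross-domain alignment step.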