Semantic-enhanced Co-attention Prompt Learning for Non-overlapping Cross-Domain Recommendation

📅 2025-05-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses multi-source-to-single-target non-overlapping cross-domain sequential recommendation (MNCSR), tackling three key challenges: strong reliance on item IDs, semantic loss from explicit domain alignment, and the difficulty of fusing heterogeneous multi-source semantics. The authors propose a text-enhanced co-attention prompt learning framework that decouples recommendation from user/item IDs and from explicit cross-domain alignment. Specifically, dual prompts, domain-shared and domain-specific, are grounded in text-based semantic embeddings to capture fine-grained item semantics. Training follows a two-stage strategy: joint pre-training across all source domains, followed by prompt fine-tuning on the target domain. This enables effective, ID-free, fine-grained semantic transfer without requiring overlapping user or item identities. Extensive experiments on three benchmark datasets demonstrate significant improvements over state-of-the-art methods, and the implementation is publicly available.
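The two-stage pre-train then prompt-tune recipe described above can be sketched with a toy model. Everything here is illustrative: the linear "backbone", the additive prompt vector, the dimensions, and the single training example are stand-ins invented for the sketch, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
params = {
    "backbone": rng.standard_normal((4, 4)) * 0.1,  # stand-in for the sequence encoder
    "prompt":   rng.standard_normal(4) * 0.1,       # toy prompt, simply added to the input
}

def train_step(x, y, lr=0.1, prompt_only=False):
    """One gradient step on a toy linear model; the backbone is frozen when prompt_only."""
    h = x + params["prompt"]
    err = params["backbone"] @ h - y            # residual of 0.5 * ||B h - y||^2
    params["prompt"] -= lr * (params["backbone"].T @ err)
    if not prompt_only:
        params["backbone"] -= lr * np.outer(err, h)

x, y = rng.standard_normal(4), rng.standard_normal(4)

# Stage 1: pre-train all parameters jointly on the (multiple) source domains.
for _ in range(50):
    train_step(x, y, prompt_only=False)

# Stage 2: on the target domain, freeze the backbone and tune only the prompt.
frozen = params["backbone"].copy()
for _ in range(50):
    train_step(x, y, prompt_only=True)
```

The design point the sketch captures is the parameter split: transferable knowledge lives in the pre-trained backbone, while the cheap, domain-specific adaptation happens entirely in the prompt parameters.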

📝 Abstract
Non-overlapping Cross-domain Sequential Recommendation (NCSR) focuses on transferring domain knowledge without overlapping entities. Compared with traditional Cross-domain Sequential Recommendation (CSR), NCSR poses several challenges: 1) NCSR methods often rely on explicit item IDs, overlooking semantic information among entities. 2) Existing CSR mainly relies on domain alignment for knowledge transfer, risking semantic loss during alignment. 3) Most previous studies ignore the many-to-one setting, which is challenging because multiple source domains must be utilized. Given these challenges, we introduce prompt learning for Many-to-one Non-overlapping Cross-domain Sequential Recommendation (MNCSR) and propose a Text-enhanced Co-attention Prompt Learning Paradigm (TCPLP). Specifically, we capture semantic meaning by representing items through text rather than IDs, leveraging the universality of natural language to facilitate cross-domain knowledge transfer. Unlike prior works that require domain alignment, we directly learn transferable domain information: two types of prompts, i.e., domain-shared and domain-specific prompts, are devised, together with a co-attention-based network for prompt encoding. We then develop a two-stage learning strategy, i.e., a pre-train & prompt-tuning paradigm, for domain knowledge pre-learning and transfer, respectively. Extensive experiments on three datasets demonstrate the superiority of TCPLP. Our source code has been publicly released.
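The ID-free item representation idea, deriving an item's embedding from its text instead of a learned ID embedding, can be illustrated minimally. The hashing-based token vectors below merely stand in for a real pretrained text encoder; the function names and the embedding dimension are made up for this sketch.

```python
import hashlib
import numpy as np

EMB_DIM = 8  # toy dimension; the paper's embedding size is not specified here

def token_vec(token: str, dim: int = EMB_DIM) -> np.ndarray:
    """Deterministic pseudo-embedding for a token (stand-in for a real text encoder)."""
    seed = int.from_bytes(hashlib.sha256(token.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(dim)

def item_embedding(title: str) -> np.ndarray:
    """Represent an item by its text: mean of token vectors, no item ID involved."""
    tokens = title.lower().split()
    return np.mean([token_vec(t) for t in tokens], axis=0)

# Items from different domains live in the same text space,
# so no overlapping user/item IDs are needed for transfer.
book = item_embedding("The Lord of the Rings")
movie = item_embedding("The Lord of the Rings Extended Edition")
```

Because every domain maps its items into one shared text space, a sequence encoder trained on book titles can consume movie titles without any ID remapping, which is the property the abstract attributes to "natural language universality".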
Problem

Research questions and friction points this paper is trying to address.

Enhancing semantic information in non-overlapping cross-domain recommendation
Avoiding semantic loss during domain alignment in knowledge transfer
Addressing many-to-one challenges in multi-source domain utilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Text-enhanced item representation for semantic capture
Co-attention prompt encoding for domain transfer
Two-stage pre-train and prompt-tuning strategy
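The co-attention prompt encoding listed above can be sketched as bidirectional attention between the two prompt sets. The score function (a plain dot product), the prompt lengths, and the residual-style fusion are illustrative choices for this sketch, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)
D, Ls, Ld = 8, 4, 4  # hypothetical: embedding dim, shared/specific prompt lengths

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(shared, specific):
    """Each prompt set attends to the other; returns the fused prompt tokens."""
    affinity = shared @ specific.T                        # (Ls, Ld) pairwise scores
    shared_ctx = softmax(affinity, axis=1) @ specific     # shared attends to specific
    specific_ctx = softmax(affinity.T, axis=1) @ shared   # specific attends to shared
    return np.concatenate([shared + shared_ctx,
                           specific + specific_ctx], axis=0)

shared_prompt = rng.standard_normal((Ls, D))    # learned across all source domains
specific_prompt = rng.standard_normal((Ld, D))  # learned per domain
fused = co_attention(shared_prompt, specific_prompt)
```

The point of attending in both directions is that transferable (shared) and domain-specific signals condition each other before being prepended to the sequence model, rather than being concatenated independently.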
Lei Guo
Shandong Normal University, Jinan, Shandong, China, 250358
Chenlong Song
Shandong Normal University, Jinan, Shandong, China, 250358
Feng Guo
Liaocheng University, Liaocheng, Shandong, China, 252000
Xiaohui Han
Qilu University of Technology (Shandong Academy of Science)
Machine Learning, Cyber Security
Xiaojun Chang
Shandong Normal University, Jinan, Shandong, China, 250358
Lei Zhu
Tongji University, Shanghai, China, 200092