LLM-EDT: Large Language Model Enhanced Cross-domain Sequential Recommendation with Dual-phase Training

📅 2025-11-25
📈 Citations: 0
🤖 AI Summary
Cross-domain sequential recommendation (CDSR) faces three key challenges: domain imbalance in interaction data, difficulty in transferring user preferences across domains, and coarse-grained user profiling. To address these, the authors propose an LLM-enhanced CDSR framework that employs a large language model as both a generator and an encoder, integrating adaptive behavior generation, domain-shared contextual modeling, and domain-specific preference aggregation. Key contributions include: (1) a transferable item augmenter that mitigates domain-dominant bias; (2) a dual-phase training strategy that decouples domain-shared modeling from domain-specific preference learning; and (3) a domain-aware profiling module that leverages the LLM's world knowledge for fine-grained user representation. Extensive experiments on three public benchmarks demonstrate significant improvements over state-of-the-art methods. The source code is publicly available.

📝 Abstract
Cross-domain Sequential Recommendation (CDSR) has been proposed to enrich user-item interactions by incorporating information from multiple domains. Despite current progress, the imbalance issue and the transition issue hinder further development of CDSR. The former refers to the phenomenon in which interactions from one domain dominate the entire behavior sequence, making it difficult to capture domain-specific features in the other domain. The latter refers to the difficulty of capturing users' cross-domain preferences within the mixed interaction sequence, resulting in poor next-item prediction performance for specific domains. With world knowledge and powerful reasoning ability, Large Language Models (LLMs) can partially alleviate these issues by serving as a generator and an encoder. However, existing LLM-enhanced CDSR methods remain underexplored and fail to address the irrelevant-noise and coarse-profiling problems. To tackle the aforementioned challenges, we propose an LLM-Enhanced Cross-domain Sequential Recommendation framework with Dual-phase Training ({LLM-EDT}). To address the imbalance issue while introducing less irrelevant noise, we first propose a transferable item augmenter that adaptively generates plausible cross-domain behaviors for users. Then, to alleviate the transition issue, we introduce a dual-phase training strategy that empowers the domain-specific thread with a domain-shared background. To resolve the coarse-profiling problem, we devise a domain-aware profiling module that summarizes the user's preference in each domain and adaptively aggregates them into comprehensive user profiles. Experiments on three public datasets validate the effectiveness of the proposed LLM-EDT. To ease reproducibility, we have released the detailed code online at {https://anonymous.4open.science/r/LLM-EDT-583F}.
Problem

Research questions and friction points this paper is trying to address.

Addressing cross-domain sequential recommendation imbalance and transition issues
Reducing irrelevant noise while generating cross-domain user behaviors
Creating comprehensive user profiles to overcome rough preference summarization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transferable item augmenter generates cross-domain behaviors adaptively
Dual-phase training strategy empowers domain-specific thread with shared background
Domain-aware profiling module summarizes and aggregates user preferences per domain
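The dual-phase idea in the bullets above can be sketched abstractly: phase one learns domain-shared parameters on the mixed interaction sequence, and phase two warm-starts each domain-specific model from those shared parameters before fine-tuning on that domain alone. The toy linear model, synthetic data, and function names below are hypothetical stand-ins for illustration only, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for encoded interaction sequences from two domains,
# concatenated into one mixed-domain dataset (dimensions are arbitrary).
X_mixed = rng.normal(size=(200, 8))
y_mixed = (X_mixed.sum(axis=1) > 0).astype(float)   # synthetic labels
X_a, y_a = X_mixed[:100], y_mixed[:100]             # "domain A" slice
X_b, y_b = X_mixed[100:], y_mixed[100:]             # "domain B" slice

def train_linear(X, y, w=None, lr=0.1, steps=200):
    """Gradient descent on a logistic loss; returns learned weights."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Phase 1: learn a domain-shared background on the mixed sequence.
w_shared = train_linear(X_mixed, y_mixed)

# Phase 2: warm-start each domain-specific thread from the shared
# weights, then fine-tune on that domain's interactions only.
w_a = train_linear(X_a, y_a, w=w_shared.copy())
w_b = train_linear(X_b, y_b, w=w_shared.copy())

def accuracy(w, X, y):
    """Fraction of correct binary predictions under weights w."""
    return float(((X @ w > 0).astype(float) == y).mean())
```

The design point this sketch captures is the decoupling: the shared phase sees all domains at once, while each specific phase only adapts the inherited parameters, so the dominant domain cannot overwrite the weaker one's fine-tuning.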
Ziwei Liu
Associate Professor, Nanyang Technological University
Computer Vision · Machine Learning · Computer Graphics
Qidong Liu
Assistant Professor, Xi'an Jiaotong University
Recommender System · Large Language Model · Intelligent Healthcare · Causal Inference · Smart Education
Wanyu Wang
City University of Hong Kong
Yejing Wang
City University of Hong Kong
Peng Chuan
Huawei Technologies
Tong Xu
University of Science and Technology of China
Wei Huang
Independent Researcher
Chong Chen
Tsinghua University
Xiangyu Zhao
City University of Hong Kong