X-Cross: Dynamic Integration of Language Models for Cross-Domain Sequential Recommendation

📅 2025-04-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the inefficiency of frequent full retraining during rapid cold-start adaptation of recommendation systems to new domains, this paper proposes a cross-domain sequential recommendation framework based on dynamic, layer-wise fusion of multiple LoRA-finetuned domain-specific language models. The core innovation is a layer-wise dynamic knowledge-crossing mechanism that preserves domain-specific characteristics while enabling adaptive parameter sharing across domains. Combined with cross-domain prompt guidance and layer-wise representation refinement, the method achieves performance comparable to LoRA fine-tuning while using only 25% of the additional parameters. Extensive cross-domain transfer experiments on Amazon datasets—e.g., from Toys to Tools, Electronics, or Sports—demonstrate substantial improvements in recommendation accuracy over strong baselines, alongside a 50–75% reduction in the fine-tuning data required.

📝 Abstract
As new products emerge daily, recommendation systems are required to quickly adapt to possible new domains without needing extensive retraining. This work presents "X-Cross" -- a novel cross-domain sequential-recommendation model that recommends products in new domains by integrating several domain-specific language models, each fine-tuned with low-rank adapters (LoRA). Given a recommendation prompt, operating layer by layer, X-Cross dynamically refines the representation of each source language model by integrating knowledge from all other models. These refined representations are propagated from one layer to the next, leveraging the activations from each domain adapter to ensure domain-specific nuances are preserved while enabling adaptability across domains. Using Amazon datasets for sequential recommendation, X-Cross achieves performance comparable to a model that is fine-tuned with LoRA, while using only 25% of the additional parameters. In cross-domain tasks, such as adapting from the Toys domain to Tools, Electronics, or Sports, X-Cross demonstrates robust performance while requiring about 50%–75% less fine-tuning data than LoRA needs to be effective. Furthermore, X-Cross achieves significant improvement in accuracy over alternative cross-domain baselines. Overall, X-Cross enables scalable and adaptive cross-domain recommendations, reducing computational overhead and providing an efficient solution for data-constrained environments.
Problem

Research questions and friction points this paper is trying to address.

Adapts to new domains without extensive retraining
Dynamically integrates multiple domain-specific language models
Reduces fine-tuning data and computational overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic integration of domain-specific language models
Layer-wise refinement with low-rank adapters (LoRA)
Cross-domain knowledge propagation preserving domain nuances
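The layer-wise knowledge crossing described above can be sketched conceptually: each domain's hidden state is adapted by its own LoRA update, then refined as a learned mixture of all domains' adapted activations before being passed to the next layer. This is a minimal illustrative sketch, not the paper's exact formulation; the rank, dimensions, mixing weights, and all function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
num_domains, d, r = 3, 8, 2   # toy values: e.g. Toys/Tools/Electronics, hidden size, LoRA rank

def lora_delta(x, A, B):
    """Standard low-rank update: B @ (A @ x)."""
    return B @ (A @ x)

# Hypothetical per-domain LoRA factors for a single transformer layer.
loras = [(rng.standard_normal((r, d)) * 0.1, rng.standard_normal((d, r)) * 0.1)
         for _ in range(num_domains)]

# Hypothetical learned mixing logits: how much each source model borrows from the others.
mix_logits = rng.standard_normal((num_domains, num_domains))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_layer(states):
    """One knowledge-crossing step (conceptual): refine each domain's state as a
    convex combination of all domains' LoRA-adapted activations."""
    adapted = [h + lora_delta(h, A, B) for h, (A, B) in zip(states, loras)]
    return [sum(softmax(mix_logits[i])[j] * adapted[j]
                for j in range(num_domains))
            for i in range(num_domains)]

states = [rng.standard_normal(d) for _ in range(num_domains)]
states = cross_layer(states)   # repeated once per transformer layer
print(len(states), states[0].shape)
```

Because only the low-rank factors and mixing weights are trained, the added parameter count stays small relative to full fine-tuning, which is the efficiency argument the paper makes.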
Authors:
Guy Hadad — Ben-Gurion University of the Negev, Beer Sheva, Israel
Haggai Roitman — Amazon & Ben-Gurion University of the Negev
Yotam Eshel — eBay, Netanya, Israel
Bracha Shapira — Ben-Gurion University of the Negev
L. Rokach — Ben-Gurion University of the Negev, Beer Sheva, Israel