🤖 AI Summary
To address the severe cold-start problem and over-reliance on collaborative signals in sequential recommendation (SR), as well as the high inference latency, incomplete distribution alignment, and catastrophic forgetting that hinder large language model (LLM) deployment, this paper proposes PAD, a Pre-train, Align, and Disentangle framework. PAD first pre-trains dual-path (collaborative and textual) models to obtain semantically rich embeddings; it then introduces a recommendation-anchored alignment loss based on multi-kernel maximum mean discrepancy (MMD) with Gaussian kernels for fine-grained distribution matching; finally, it fine-tunes a frequency-aware triple-expert architecture, comprising one aligned expert and two modality-specific experts with disentangled embeddings, to preserve both cross-modal consistency and modality specificity. On three public benchmarks, PAD substantially improves overall recommendation performance, especially for cold items, and is compatible with diverse SR backbone models, demonstrating strong generalizability and reproducibility.
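The triple-expert design combines an aligned expert with two modality-specific experts, weighted in a frequency-aware manner. As a purely illustrative sketch (the gating function, its inputs, and all names here are hypothetical, not the paper's implementation), one simple realization is a softmax gate over the three expert embeddings driven by log item frequency:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def triple_expert_embedding(e_align, e_collab, e_text, item_freq, w, b):
    """Frequency-aware mixture of three expert embeddings (illustrative only).

    e_align, e_collab, e_text : per-expert item embeddings, shape (d,)
    item_freq                 : scalar interaction count of the item
    w, b                      : per-expert gate parameters, shape (3,)
    The gate here is a softmax over a linear function of log frequency,
    so cold items (low freq) and popular items can weight experts differently.
    """
    logits = w * np.log1p(item_freq) + b
    gate = softmax(logits)  # three nonnegative weights summing to 1
    return gate[0] * e_align + gate[1] * e_collab + gate[2] * e_text
```

With `w = 0` the gate degenerates to a uniform average of the three experts; a learned `w` lets the model lean on textual semantics for cold items and collaborative signals for frequent ones, which is the intuition behind frequency-aware fine-tuning.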
📝 Abstract
Sequential Recommendation (SR) aims to leverage the sequential patterns in users' historical interactions to accurately track their preferences. However, the primary reliance of existing SR methods on collaborative data results in challenges such as the cold-start problem and sub-optimal performance. Concurrently, despite the proven effectiveness of large language models (LLMs), their integration into commercial recommender systems is impeded by issues such as high inference latency, incomplete capture of all distribution statistics, and catastrophic forgetting. To address these issues, we introduce a novel Pre-train, Align, and Disentangle (PAD) framework to enhance SR models with LLMs. In particular, we initially pre-train both the SR and LLM models to obtain collaborative and textual embeddings. Subsequently, we propose a characteristic recommendation-anchored alignment loss using multi-kernel maximum mean discrepancy with Gaussian kernels. Lastly, a triple-experts architecture, comprising aligned and modality-specific experts with disentangled embeddings, is fine-tuned in a frequency-aware manner. Experimental results on three public datasets validate the efficacy of PAD, indicating substantial enhancements, particularly for cold items, and compatibility with various SR backbone models. The code and datasets are accessible for reproduction at https://github.com/Applied-Machine-Learning-Lab/PAD.
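The alignment step matches the collaborative and textual embedding distributions with a multi-kernel MMD loss using Gaussian kernels. As a minimal sketch of that statistic only (a NumPy illustration with hypothetical bandwidths, not the paper's code, which would also anchor the loss to the recommendation task), the biased squared-MMD estimate between two embedding samples is:

```python
import numpy as np

def multi_gaussian_kernel(x, y, bandwidths):
    """Sum of Gaussian (RBF) kernels over several bandwidths (multi-kernel)."""
    # Pairwise squared Euclidean distances between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return sum(np.exp(-d2 / (2.0 * s ** 2)) for s in bandwidths)

def mmd2(x, y, bandwidths=(1.0, 2.0, 4.0)):
    """Biased estimate of squared MMD between samples x and y.

    x, y : arrays of shape (n, d) and (m, d), e.g. collaborative vs. textual
    embeddings of the same items. Returns 0 when the samples coincide and
    grows as the two distributions diverge.
    """
    kxx = multi_gaussian_kernel(x, x, bandwidths).mean()
    kyy = multi_gaussian_kernel(y, y, bandwidths).mean()
    kxy = multi_gaussian_kernel(x, y, bandwidths).mean()
    return kxx + kyy - 2.0 * kxy
```

Summing kernels over multiple bandwidths is what makes the discrepancy sensitive to distribution differences at several scales, rather than tuning a single Gaussian width.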