Self-Supervised Representation Learning with ID-Content Modality Alignment for Sequential Recommendation

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the performance degradation of ID-based sequential recommendation (SR) under sparse interaction histories, this paper proposes a self-supervised representation learning framework that aligns ID and multimodal content modalities. Methodologically, it (1) disentangles collaborative dependencies from content dependencies; (2) introduces an LLM-driven sample construction mechanism and a two-stage training strategy; and (3) incorporates a hybrid-modal sequence decoder. By unifying supervised fine-tuning, contrastive learning, and self-supervised learning, the framework jointly optimizes ID embeddings and multimodal content representations, thereby bridging semantic gaps across modalities and jointly modeling behavioral and content preferences. Extensive experiments on four video streaming datasets demonstrate substantial average improvements over state-of-the-art methods: +8.04% in NDCG@5 over ID-modality recommenders and +6.62% in NDCG@10 over content-modality recommenders.

📝 Abstract
Sequential recommendation (SR) models often capture user preferences based on the historically interacted item IDs, which usually obtain sub-optimal performance when the interaction history is limited. Content-based sequential recommendation has recently emerged as a promising direction that exploits items' textual and visual features to enhance preference learning. However, there are still three key challenges: (i) how to reduce the semantic gap between different content modality representations; (ii) how to jointly model user behavior preferences and content preferences; and (iii) how to design an effective training strategy to align ID representations and content representations. To address these challenges, we propose a novel model, self-supervised representation learning with ID-Content modality alignment, named SICSRec. Firstly, we propose an LLM-driven sample construction method and develop a supervised fine-tuning approach to align item-level modality representations. Secondly, we design a novel Transformer-based sequential model, where an ID-modality sequence encoder captures user behavior preferences, a content-modality sequence encoder learns user content preferences, and a mix-modality sequence decoder grasps the intrinsic relationship between these two types of preferences. Thirdly, we propose a two-step training strategy with a content-aware contrastive learning task to align modality representations and ID representations, which decouples the training process of content modality dependency and item collaborative dependency. Extensive experiments conducted on four public video streaming datasets demonstrate that our SICSRec outperforms the state-of-the-art ID-modality sequential recommenders and content-modality sequential recommenders by 8.04% on NDCG@5 and 6.62% on NDCG@10 on average, respectively.
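The content-aware contrastive learning task described in the abstract can be illustrated with a minimal sketch. The paper's exact loss is not reproduced here; this is a generic symmetric InfoNCE-style objective in NumPy, where row i of the ID-embedding matrix and row i of the content-embedding matrix form a positive pair and all other rows serve as in-batch negatives (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def info_nce_alignment(id_emb, content_emb, temperature=0.1):
    """Symmetric InfoNCE-style loss aligning ID and content embeddings.

    id_emb, content_emb: (batch, dim) arrays for the same batch of items;
    row i in each matrix is a positive pair, all other rows are negatives.
    """
    # L2-normalize so the dot product becomes a cosine similarity
    id_n = id_emb / np.linalg.norm(id_emb, axis=1, keepdims=True)
    ct_n = content_emb / np.linalg.norm(content_emb, axis=1, keepdims=True)
    logits = id_n @ ct_n.T / temperature  # (batch, batch) similarity matrix
    labels = np.arange(len(id_emb))       # diagonal entries are the positives

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # symmetric: ID -> content and content -> ID directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls each item's ID embedding toward its own content embedding while pushing it away from other items', which is one standard way to realize the ID-content alignment goal stated above.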
Problem

Research questions and friction points this paper is trying to address.

Aligning item-level modality representations to reduce semantic gaps
Jointly modeling user behavior and content preferences in sequences
Developing training strategies for ID-content representation alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-driven sample construction aligns modality representations
Transformer-based model captures behavior and content preferences
Two-step training strategy with contrastive learning aligns representations
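The joint modeling of behavior and content preferences listed above can be sketched at the scoring stage. In the paper this is done by a Transformer-based mix-modality sequence decoder; the sketch below only shows the shape contract, with the two channels combined by a simple sum as a hypothetical stand-in (all names are illustrative):

```python
import numpy as np

def score_next_item(id_seq_repr, content_seq_repr, id_item_emb, content_item_emb):
    """Score candidate items from both preference channels.

    id_seq_repr / content_seq_repr: (dim,) user representations produced by
    the ID-modality and content-modality sequence encoders, respectively.
    id_item_emb / content_item_emb: (n_items, dim) candidate item embeddings.
    Returns one score per candidate; summing the two channels here stands in
    for the paper's learned mix-modality decoder.
    """
    return id_item_emb @ id_seq_repr + content_item_emb @ content_seq_repr
```

A candidate that matches the user's behavioral history *and* content taste scores highest, reflecting why combining the two channels helps when ID-only signals are sparse.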
Donglin Zhou
College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
Weike Pan
Professor, Shenzhen University
Recommender Systems, Deep Learning, Transfer Learning, Federated Learning, Machine Learning
Zhong Ming
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen 518123, China