🤖 AI Summary
Existing LLM-based sequential recommendation methods face two critical challenges: embedding collapse induced by integrating pre-trained collaborative embeddings, and catastrophic forgetting caused by semantic ID quantization—both of which limit model scalability and recommendation performance. To address these, we propose MME-SID, a novel framework that integrates multimodal embeddings with quantized semantic IDs. Specifically, MM-RQ-VAE—incorporating maximum mean discrepancy and contrastive learning—captures cross-modal correlations while preserving intra-modal distance structure; initializing the model with its trained multimodal codebook mitigates forgetting. We then fine-tune Llama3-8B-instruct efficiently with LoRA, jointly modeling multimodal embeddings and semantic IDs via frequency-aware fusion. Evaluated on three public benchmarks, MME-SID consistently outperforms state-of-the-art methods, effectively suppressing embedding collapse and catastrophic forgetting while substantially improving recommendation accuracy and model scalability.
📝 Abstract
Sequential recommendation (SR) aims to capture users' dynamic interests and sequential behavior patterns from their historical interactions. Recently, the powerful capabilities of large language models (LLMs) have driven their adoption in SR. However, we identify two critical challenges in existing LLM-based SR methods: 1) embedding collapse when incorporating pre-trained collaborative embeddings and 2) catastrophic forgetting of quantized embeddings when utilizing semantic IDs. These issues limit model scalability and lead to suboptimal recommendation performance. To address them, we introduce MME-SID, a novel SR framework built on LLMs such as Llama3-8B-instruct, which integrates multimodal embeddings and quantized embeddings to mitigate embedding collapse. In addition, we propose a Multimodal Residual Quantized Variational Autoencoder (MM-RQ-VAE) that uses maximum mean discrepancy as the reconstruction loss to preserve intra-modal distance information and contrastive learning to capture inter-modal correlations. To further alleviate catastrophic forgetting, we initialize the model with the trained multimodal code embeddings. Finally, we fine-tune the LLM efficiently with LoRA using multimodal frequency-aware fusion. Extensive experiments on three public datasets validate the superior performance of MME-SID, owing to its ability to mitigate embedding collapse and catastrophic forgetting. The implementation code and datasets are publicly available for reproduction: https://github.com/Applied-Machine-Learning-Lab/MME-SID.
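The abstract states that MM-RQ-VAE uses maximum mean discrepancy (MMD) as its reconstruction loss to preserve intra-modal distance information, but does not specify the kernel or estimator. Below is a minimal, illustrative NumPy sketch of a biased MMD² estimator with an RBF kernel — the kernel choice, the `sigma` bandwidth, and the batch shapes are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of x and rows of y."""
    # Pairwise squared Euclidean distances, shape (len(x), len(y)).
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x and y.

    Unlike a pointwise MSE reconstruction loss, MMD compares the two
    sample *distributions*, so it is sensitive to the pairwise distance
    structure of the embeddings rather than to per-item errors.
    """
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())

# Toy check: identical samples give zero MMD; a shifted sample does not.
rng = np.random.default_rng(0)
emb = rng.normal(size=(64, 8))          # e.g. a batch of modality embeddings
recon_good = emb                        # perfect "reconstruction"
recon_bad = emb + 3.0                   # distribution shifted away
print(mmd2(emb, recon_good))            # ~0: distributions match
print(mmd2(emb, recon_bad))             # clearly positive: mismatch penalized
```

In an RQ-VAE training loop, a term like `mmd2(inputs, decoder_outputs)` would replace (or complement) the usual MSE reconstruction loss; the paper's actual loss weighting and kernel settings may differ.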