Multimodal Generative Recommendation for Fusing Semantic and Collaborative Signals

📅 2026-02-03
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses the limited performance of existing generative recommendation methods on large item sets, where they often fail to surpass conventional sequential models. To overcome this, the authors propose MSCGRec, a framework that integrates multimodal semantics with collaborative signals. Specifically, it employs DINO-based self-supervised quantization learning to enhance visual representations, treats collaborative features extracted from sequential recommenders as a separate modality for fusion, and introduces constrained sequence learning that restricts the large output space during training to the set of permissible tokens. Experiments on three large real-world datasets show that MSCGRec outperforms state-of-the-art generative and sequential recommendation baselines, and ablation studies confirm the contribution of each component.

📝 Abstract
Sequential recommender systems rank relevant items by modeling a user's interaction history and computing the inner product between the resulting user representation and stored item embeddings. To avoid the significant memory overhead of storing large item sets, the generative recommendation paradigm instead models each item as a series of discrete semantic codes. Here, the next item is predicted by an autoregressive model that generates the code sequence corresponding to the predicted item. However, despite promising ranking capabilities on small datasets, these methods have yet to surpass traditional sequential recommenders on large item sets, limiting their adoption in the very scenarios they were designed to address. To resolve this, we propose MSCGRec, a Multimodal Semantic and Collaborative Generative Recommender. MSCGRec incorporates multiple semantic modalities and introduces a novel self-supervised quantization learning approach for images based on the DINO framework. Additionally, MSCGRec fuses collaborative and semantic signals by extracting collaborative features from sequential recommenders and treating them as a separate modality. Finally, we propose constrained sequence learning that restricts the large output space during training to the set of permissible tokens. We empirically demonstrate on three large real-world datasets that MSCGRec outperforms both sequential and generative recommendation baselines and provide an extensive ablation study to validate the impact of each component.
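The abstract's "constrained sequence learning" restricts generation to code sequences that correspond to real items. The paper's exact mechanism isn't reproduced here; a minimal sketch under assumed details (a prefix trie over the catalog's semantic-code sequences, used to mask next-token logits) might look like:

```python
# Hypothetical sketch: constrain next-code choices to those that can still
# complete a valid item, via a prefix trie over item code sequences.

def build_trie(item_codes):
    """Map each code-sequence prefix to the set of permissible next codes."""
    trie = {}
    for codes in item_codes:
        for i in range(len(codes)):
            trie.setdefault(tuple(codes[:i]), set()).add(codes[i])
    return trie

def mask_logits(logits, prefix, trie):
    """Set logits of impermissible codes to -inf so only valid ones score."""
    allowed = trie.get(tuple(prefix), set())
    return [x if tok in allowed else float("-inf")
            for tok, x in enumerate(logits)]

# Toy catalog: three items, each represented by 3 discrete semantic codes.
items = [(0, 1, 2), (0, 1, 3), (2, 0, 1)]
trie = build_trie(items)

masked = mask_logits([0.5, 0.1, 0.9, -0.2], prefix=[0, 1], trie=trie)
# After prefix (0, 1), only codes 2 and 3 keep finite logits.
```

The same masking can be applied during training (restricting the softmax to permissible tokens, as the abstract describes) or at decode time; the trie itself is only one plausible way to index permissible continuations.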
Problem

Research questions and friction points this paper is trying to address.

- generative recommendation
- large item sets
- sequential recommendation
- semantic codes
- ranking performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Multimodal Generative Recommendation
- Self-supervised Quantization
- Collaborative-Semantic Fusion
- Constrained Sequence Learning
- DINO-based Image Encoding