Q-BERT4Rec: Quantized Semantic-ID Representation Learning for Multimodal Recommendation

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Transformer-based sequential recommendation methods (e.g., BERT4Rec) rely solely on discrete item IDs, neglecting rich multimodal semantic information—such as text and images—leading to poor generalization and limited interpretability. To address this, we propose the first semantic unification and quantization framework for multimodal sequential recommendation. Our approach: (i) introduces cross-modal semantic injection to map heterogeneous modalities (text, images) into a unified semantic ID space; (ii) pioneers residual vector quantization (RVQ) for semantic tokenization in sequential recommendation; and (iii) designs a multi-region masking pretraining strategy (span/tail/multi-region) to enhance sequential understanding and generalization. Extensive experiments on Amazon multimodal benchmarks demonstrate significant improvements over state-of-the-art methods—including BERT4Rec and CLIP4Rec—validating that semantic tokenization simultaneously boosts both recommendation accuracy and model interpretability.
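The residual vector quantization (RVQ) step the summary describes can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual tokenizer: each codebook level quantizes the residual left by the previous level, so the stacked token indices form the item's semantic ID. The codebook sizes and contents here are toy assumptions.

```python
import numpy as np

def residual_vector_quantize(x, codebooks):
    """Quantize a vector with a stack of codebooks: each level encodes the
    residual left by the previous one. Returns token indices per level and
    the reconstructed vector (sum of selected centroids)."""
    residual = np.asarray(x, dtype=np.float64)
    tokens, recon = [], np.zeros_like(residual)
    for codebook in codebooks:  # codebook: (K, d) array of centroids
        dists = np.linalg.norm(codebook - residual, axis=1)
        idx = int(np.argmin(dists))
        tokens.append(idx)
        recon += codebook[idx]
        residual = residual - codebook[idx]
    return tokens, recon

# Toy two-level example: the second codebook cleans up the first's residual.
cb1 = np.array([[0.0, 0.0], [1.0, 1.0]])
cb2 = np.array([[-0.1, 0.1], [0.1, -0.1]])
tokens, recon = residual_vector_quantize([0.9, 1.1], [cb1, cb2])
# tokens == [1, 0]; recon reconstructs [0.9, 1.1] exactly here
```

In practice the codebooks are learned jointly with the encoder; the key property shown is that later levels refine earlier ones, so short token sequences capture fine-grained item semantics.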

📝 Abstract
Sequential recommendation plays a critical role in modern online platforms such as e-commerce, advertising, and content streaming, where accurately predicting users' next interactions is essential for personalization. Recent Transformer-based methods like BERT4Rec have shown strong modeling capability, yet they still rely on discrete item IDs that lack semantic meaning and ignore rich multimodal information (e.g., text and image). This leads to weak generalization and limited interpretability. To address these challenges, we propose Q-Bert4Rec, a multimodal sequential recommendation framework that unifies semantic representation and quantized modeling. Specifically, Q-Bert4Rec consists of three stages: (1) cross-modal semantic injection, which enriches randomly initialized ID embeddings through a dynamic transformer that fuses textual, visual, and structural features; (2) semantic quantization, which discretizes fused representations into meaningful tokens via residual vector quantization; and (3) multi-mask pretraining and fine-tuning, which leverage diverse masking strategies -- span, tail, and multi-region -- to improve sequential understanding. We validate our model on public Amazon benchmarks and demonstrate that Q-Bert4Rec significantly outperforms many strong existing methods, confirming the effectiveness of semantic tokenization for multimodal sequential recommendation. Our source code will be publicly available on GitHub upon publication.
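Stage (1), cross-modal semantic injection, can be illustrated with a minimal sketch. The paper's dynamic transformer is not specified here; this hypothetical version simply concatenates text and image features and applies a learned projection as a residual update to the ID embedding, which conveys the general idea of enriching ID embeddings with modality features.

```python
import numpy as np

def semantic_injection(id_emb, text_emb, image_emb, W):
    """Hypothetical sketch of cross-modal injection: project concatenated
    modality features and add them residually to the ID embedding.
    W is a learned (d, d_t + d_v) projection matrix (assumed shape)."""
    fused = np.concatenate([text_emb, image_emb])   # (d_t + d_v,)
    return id_emb + np.tanh(W @ fused)              # (d,)

# Toy shapes: d = 4, d_t = d_v = 3.
W = np.zeros((4, 6))                 # untrained projection: injection is a no-op
out = semantic_injection(np.ones(4), np.zeros(3), np.zeros(3), W)
```

The residual form keeps the original ID embedding recoverable when the modality signal is weak; the actual model replaces the single projection with a transformer-based fusion over textual, visual, and structural features.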
Problem

Research questions and friction points this paper is trying to address.

Enhances item embeddings with multimodal semantic information.
Replaces discrete IDs with quantized semantic tokens for recommendation.
Improves generalization and interpretability in sequential recommendation systems.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-modal semantic injection enriches ID embeddings with multimodal features.
Semantic quantization discretizes representations via residual vector quantization.
Multi-mask pretraining uses diverse strategies to enhance sequential understanding.
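The three masking strategies named above (span, tail, multi-region) can be sketched as simple sequence transforms. These are illustrative implementations under assumed hyperparameters (span length, number of regions), not the paper's exact sampling procedure.

```python
import random

def span_mask(seq, mask_token, span_len=3):
    """Mask one contiguous span of items (span masking)."""
    seq = list(seq)
    start = random.randrange(0, max(1, len(seq) - span_len + 1))
    for i in range(start, min(len(seq), start + span_len)):
        seq[i] = mask_token
    return seq

def tail_mask(seq, mask_token, k=2):
    """Mask the last k items (tail masking), mimicking next-item prediction."""
    seq = list(seq)
    for i in range(max(0, len(seq) - k), len(seq)):
        seq[i] = mask_token
    return seq

def multi_region_mask(seq, mask_token, n_regions=2, span_len=2):
    """Mask several short spans, which may overlap (multi-region masking)."""
    seq = list(seq)
    for _ in range(n_regions):
        start = random.randrange(0, max(1, len(seq) - span_len + 1))
        for i in range(start, min(len(seq), start + span_len)):
            seq[i] = mask_token
    return seq

# Example: tail masking deterministically hides the last two interactions.
masked = tail_mask([1, 2, 3, 4, 5], "[MASK]", k=2)
# masked == [1, 2, 3, "[MASK]", "[MASK]"]
```

Tail masking aligns pretraining with the next-item prediction objective used at inference, while span and multi-region masking force the model to reconstruct items from bidirectional context.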
🔎 Similar Papers
2024-05-12 · International Conference on Information and Knowledge Management · Citations: 60
Haofeng Huang (Tsinghua University)
Generative Models · Efficient Machine Learning · Machine Learning System
Ling Gai (University of Shanghai for Science and Technology, Shanghai, China)