Pre-training Generative Recommender with Multi-Identifier Item Tokenization

📅 2025-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address weak semantic modeling of infrequent items and insufficient interaction sequence diversity in generative recommendation, this paper proposes a multi-identifier item tokenization and data-influence-driven curriculum pretraining framework. Methodologically, it introduces (1) a novel multi-identifier tokenization mechanism based on RQ-VAE with multi-checkpoint collaboration, which maps each item to a set of semantically correlated tokens rather than a single token; and (2) a data-influence-estimated curriculum learning strategy that dynamically adjusts sampling probabilities across multiple tokenized sequences. The framework adopts a two-stage training paradigm: multi-tokenizer pretraining followed by single-tokenizer fine-tuning. Evaluated on three public benchmarks, it achieves 12.7%–18.3% improvements in recommendation performance, significantly enhancing modeling capability for long user sequences and large-scale item catalogs. The approach demonstrates strong generalizability and scalability.

📝 Abstract
Generative recommendation autoregressively generates item identifiers to recommend potential items. Existing methods typically adopt a one-to-one mapping strategy, where each item is represented by a single identifier. However, this scheme poses issues, such as suboptimal semantic modeling for low-frequency items and limited diversity in token sequence data. To overcome these limitations, we propose MTGRec, which leverages Multi-identifier item Tokenization to augment token sequence data for Generative Recommender pre-training. Our approach involves two key innovations: multi-identifier item tokenization and curriculum recommender pre-training. For multi-identifier item tokenization, we leverage the RQ-VAE as the tokenizer backbone and treat model checkpoints from adjacent training epochs as semantically relevant tokenizers. This allows each item to be associated with multiple identifiers, enabling a single user interaction sequence to be converted into several token sequences as different data groups. For curriculum recommender pre-training, we introduce a curriculum learning scheme guided by data influence estimation, dynamically adjusting the sampling probability of each data group during recommender pre-training. After pre-training, we fine-tune the model using a single tokenizer to ensure accurate item identification for recommendation. Extensive experiments on three public benchmark datasets demonstrate that MTGRec significantly outperforms both traditional and generative recommendation baselines in terms of effectiveness and scalability.
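The multi-identifier tokenization described above can be sketched in a few lines. This is a toy illustration only: the hard-coded token maps below stand in for trained RQ-VAE checkpoints (which MTGRec takes from adjacent training epochs), and all names are hypothetical.

```python
from typing import Dict, List, Tuple

# Hypothetical stand-in for an RQ-VAE checkpoint: maps an item id to its
# multi-token semantic identifier. A real tokenizer is a trained model.
CheckpointTokenizer = Dict[int, Tuple[str, ...]]

def tokenize_sequence(seq: List[int], tokenizer: CheckpointTokenizer) -> List[str]:
    """Flatten one user interaction sequence into a single token sequence."""
    tokens: List[str] = []
    for item in seq:
        tokens.extend(tokenizer[item])
    return tokens

def multi_tokenize(seq: List[int], tokenizers: List[CheckpointTokenizer]) -> List[List[str]]:
    """One interaction sequence -> several token sequences, one per checkpoint.
    Each result is treated as a separate data group during pre-training."""
    return [tokenize_sequence(seq, t) for t in tokenizers]

# Toy "checkpoints": semantically related tokenizers from adjacent epochs.
ckpt_a = {1: ("a1", "b2"), 2: ("a3", "b1")}
ckpt_b = {1: ("a1", "b3"), 2: ("a2", "b1")}

groups = multi_tokenize([1, 2], [ckpt_a, ckpt_b])
# groups[0] and groups[1] are two token views of the same user sequence
```

The key point the sketch captures is that a single user interaction sequence yields multiple token sequences, which augments the pre-training data without collecting new interactions.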
Problem

Research questions and friction points this paper is trying to address.

Improves semantic modeling for low-frequency items
Enhances diversity in token sequence data
Overcomes limitations of one-to-one item identifier mapping
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-identifier tokenization using RQ-VAE backbone
Curriculum learning with dynamic data sampling
Fine-tuning with single tokenizer for accuracy
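The curriculum pre-training idea can be sketched as weighted sampling over the tokenized data groups. Note this is a simplified assumption: a plain softmax over per-group scores stands in for the paper's data-influence estimation, which is more involved; all names here are illustrative.

```python
import math
import random
from typing import List, Sequence

def group_sampling_probs(influence_scores: Sequence[float],
                         temperature: float = 1.0) -> List[float]:
    """Turn per-group influence estimates into sampling probabilities via a
    softmax (assumption: simplified stand-in for the paper's estimator)."""
    exps = [math.exp(s / temperature) for s in influence_scores]
    z = sum(exps)
    return [e / z for e in exps]

def sample_group(groups: Sequence, probs: Sequence[float], rng=random):
    """Draw one data group for the next pre-training batch."""
    return rng.choices(groups, weights=probs, k=1)[0]

# Three data groups (e.g. token sequences from three tokenizer checkpoints)
# with hypothetical influence estimates; higher influence -> sampled more often.
probs = group_sampling_probs([0.2, 0.5, 0.1])
```

As influence estimates are refreshed during pre-training, the probabilities shift, which is what makes the schedule a curriculum rather than a fixed mixture.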
Bowen Zheng
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China

Enze Liu
Renmin University of China
Recommender Systems · Large Language Models

Zhongfu Chen
Poisson Lab, Huawei, Beijing, China

Zhongrui Ma
Poisson Lab, Huawei, Beijing, China

Yue Wang
Poisson Lab, Huawei, Beijing, China

Wayne Xin Zhao
Professor, Renmin University of China
Recommender System · Natural Language Processing · Large Language Model

Ji-Rong Wen
Gaoling School of Artificial Intelligence, Renmin University of China
Large Language Model · Web Search · Information Retrieval · Machine Learning