🤖 AI Summary
To address weak semantic modeling of infrequent items and insufficient interaction-sequence diversity in generative recommendation, this paper proposes a multi-identifier item tokenization and data-influence-driven curriculum pretraining framework. Methodologically, it introduces (1) a multi-identifier tokenization mechanism that uses RQ-VAE checkpoints from adjacent training epochs as semantically related tokenizers, so each item maps to a set of correlated identifiers rather than a single one; and (2) a data-influence-driven curriculum learning strategy that dynamically adjusts sampling probabilities across the resulting tokenized sequence groups. The framework adopts a two-stage training paradigm: multi-tokenizer pretraining followed by single-tokenizer fine-tuning. Evaluated on three public benchmarks, it achieves 12.7%–18.3% improvements in recommendation performance, notably strengthening the modeling of long user sequences and large-scale item catalogs, and demonstrates strong generalizability and scalability.
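To make the multi-identifier idea concrete, here is a minimal sketch of RQ-style tokenization where codebooks from several "checkpoints" each assign the same item a different identifier. This is an illustrative toy, not the paper's implementation: the function name, the random codebooks, and the number of levels/checkpoints are all assumptions.

```python
import numpy as np

def rq_tokenize(emb, codebooks):
    """Residual quantization (simplified RQ-VAE-style): at each level,
    pick the nearest codeword and quantize the remaining residual."""
    ids, residual = [], emb.astype(float)
    for cb in codebooks:  # cb: (K, d) codebook for one quantization level
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        ids.append(idx)
        residual = residual - cb[idx]
    return tuple(ids)

rng = np.random.default_rng(0)
d = 8
item_emb = rng.normal(size=d)
# Hypothetical "checkpoints": one set of codebooks per adjacent training
# epoch. Each checkpoint tokenizes the item into one identifier; together
# they form the item's multi-identifier set.
checkpoints = [[rng.normal(size=(16, d)) for _ in range(3)] for _ in range(4)]
identifiers = {rq_tokenize(item_emb, cbs) for cbs in checkpoints}
print(identifiers)  # up to 4 distinct 3-token identifiers for one item
```

In the full method the checkpoints are semantically close (they come from adjacent epochs of the same RQ-VAE run), so the identifiers are correlated rather than independent as in this random toy.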
📝 Abstract
Generative recommendation autoregressively generates item identifiers to recommend potential items. Existing methods typically adopt a one-to-one mapping strategy, where each item is represented by a single identifier. However, this scheme poses issues such as suboptimal semantic modeling for low-frequency items and limited diversity in token sequence data. To overcome these limitations, we propose MTGRec, which leverages Multi-identifier item Tokenization to augment token sequence data for Generative Recommender pre-training. Our approach involves two key innovations: multi-identifier item tokenization and curriculum recommender pre-training. For multi-identifier item tokenization, we leverage the RQ-VAE as the tokenizer backbone and treat model checkpoints from adjacent training epochs as semantically relevant tokenizers. This allows each item to be associated with multiple identifiers, so a single user interaction sequence can be converted into several token sequences, which serve as different data groups. For curriculum recommender pre-training, we introduce a curriculum learning scheme guided by data influence estimation, dynamically adjusting the sampling probability of each data group during recommender pre-training. After pre-training, we fine-tune the model using a single tokenizer to ensure accurate item identification for recommendation. Extensive experiments on three public benchmark datasets demonstrate that MTGRec significantly outperforms both traditional and generative recommendation baselines in terms of effectiveness and scalability.
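The curriculum step above can be sketched as follows: maintain an influence estimate per tokenized data group and turn those estimates into sampling probabilities for the next stretch of pre-training. This is a minimal illustration under assumptions of my own (the softmax weighting, the temperature, and the placeholder influence scores are not from the paper, which specifies its own data-influence estimator).

```python
import numpy as np

def group_sampling_probs(influence, tau=1.0):
    """Softmax over estimated data-influence scores: groups whose data
    is estimated to help pre-training more are sampled more often."""
    z = np.asarray(influence, dtype=float) / tau
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical influence estimates for four data groups (one per
# tokenizer checkpoint); these would be re-estimated during training.
influence = [0.8, 0.5, 0.2, -0.1]
probs = group_sampling_probs(influence)

# Draw which group each training sequence in a batch comes from.
rng = np.random.default_rng(0)
batch_groups = rng.choice(len(probs), size=1024, p=probs)
```

Lowering `tau` sharpens the curriculum toward the most influential group; raising it approaches uniform sampling over all tokenizers' data.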