TokenRec: Learning to Tokenize ID for LLM-based Generative Recommendation

πŸ“… 2024-06-15
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 20
✨ Influential: 1
πŸ“„ PDF
πŸ€– AI Summary
To address the challenge of efficiently and generally mapping user/item IDs to discrete tokens in LLM-driven generative recommendation, this paper proposes the Masked Vector-Quantized (MQ) Tokenizer, presented as the first method to discretize collaborative filtering embeddings into LLM-compatible tokens, thereby overcoming the limitations of textual prompts and continuous latent vectors in high-order collaborative modeling and cold-start scenarios. The paper further introduces a non-autoregressive generative retrieval paradigm that eliminates beam search and enables end-to-end top-K recommendation. The approach integrates a VQ-VAE variant, collaborative embedding learning, lightweight adaptation, and prompt engineering. Experiments across multiple benchmarks demonstrate significant improvements over both traditional and LLM-based baselines, including a 3.2× inference speedup and a 19.7% gain in Recall@10 under cold-start conditions.

πŸ“ Abstract
There is a growing interest in utilizing large-scale language models (LLMs) to advance next-generation Recommender Systems (RecSys), driven by their outstanding language understanding and in-context learning capabilities. In this scenario, tokenizing (i.e., indexing) users and items becomes essential for ensuring a seamless alignment of LLMs with recommendations. While several studies have made progress in representing users and items through textual contents or latent representations, challenges remain in efficiently capturing high-order collaborative knowledge into discrete tokens that are compatible with LLMs. Additionally, the majority of existing tokenization approaches often face difficulties in generalizing effectively to new/unseen users or items that were not in the training corpus. To address these challenges, we propose a novel framework called TokenRec, which introduces not only an effective ID tokenization strategy but also an efficient retrieval paradigm for LLM-based recommendations. Specifically, our tokenization strategy, Masked Vector-Quantized (MQ) Tokenizer, involves quantizing the masked user/item representations learned from collaborative filtering into discrete tokens, thus achieving a smooth incorporation of high-order collaborative knowledge and a generalizable tokenization of users and items for LLM-based RecSys. Meanwhile, our generative retrieval paradigm is designed to efficiently recommend top-$K$ items for users to eliminate the need for the time-consuming auto-regressive decoding and beam search processes used by LLMs, thus significantly reducing inference time. Comprehensive experiments validate the effectiveness of the proposed methods, demonstrating that TokenRec outperforms competitive benchmarks, including both traditional recommender systems and emerging LLM-based recommender systems.
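The core idea of the MQ Tokenizer, as described in the abstract, is to quantize masked views of a user/item embedding learned by collaborative filtering into discrete codebook indices that an LLM can consume as tokens. A minimal, hypothetical sketch of that quantization step (the function name, masking scheme, and codebook size are illustrative assumptions, not the paper's actual architecture, which is learned end to end as a VQ-VAE variant):

```python
import numpy as np

rng = np.random.default_rng(0)

def mq_tokenize(embedding, codebook, n_masks=3, mask_ratio=0.5, rng=rng):
    """Quantize several masked views of a collaborative-filtering
    embedding into discrete codebook indices (simplified sketch of
    the masked vector-quantization idea; not the paper's exact model)."""
    d = embedding.shape[0]
    tokens = []
    for _ in range(n_masks):
        mask = rng.random(d) < mask_ratio          # randomly mask dimensions
        masked = np.where(mask, 0.0, embedding)    # zero out the masked part
        # nearest codebook vector under Euclidean distance
        dists = np.linalg.norm(codebook - masked, axis=1)
        tokens.append(int(np.argmin(dists)))
    return tokens

# toy example: a 64-d CF embedding quantized against a 256-entry codebook
codebook = rng.normal(size=(256, 64))
user_emb = rng.normal(size=64)
tokens = mq_tokenize(user_emb, codebook)  # three discrete token IDs for this user
```

In the actual framework the codebook is trained jointly with a reconstruction objective so that nearby users/items share tokens, which is what lets unseen users/items be tokenized at inference time; the sketch above only shows the inference-time lookup.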
Problem

Research questions and friction points this paper is trying to address.

Efficiently tokenizing users/items for LLM-based recommendations
Capturing high-order collaborative knowledge into discrete tokens
Generalizing tokenization to new/unseen users and items
Innovation

Methods, ideas, or system contributions that make the work stand out.

Masked Vector-Quantized Tokenizer for ID tokenization
Generative retrieval paradigm for efficient recommendations
Incorporates high-order collaborative knowledge into tokens
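The generative retrieval paradigm listed above replaces auto-regressive decoding and beam search with a single scoring pass over the item catalog. A hedged sketch of that retrieval step, assuming the LLM has produced a dense user representation and each item has a comparable embedding (both names and the dot-product scoring are illustrative assumptions):

```python
import numpy as np

def topk_retrieve(user_repr, item_embs, k=10):
    """Non-autoregressive top-K retrieval: score every item in one
    matrix product and take the K highest, avoiding beam search
    (sketch of the paradigm, not the paper's exact scoring head)."""
    scores = item_embs @ user_repr           # one pass over the catalog
    top = np.argpartition(-scores, k)[:k]    # unordered top-K candidates
    return top[np.argsort(-scores[top])]     # ranked by descending score

rng = np.random.default_rng(1)
items = rng.normal(size=(1000, 32))   # toy catalog of 1000 item embeddings
user = rng.normal(size=32)            # toy LLM-derived user representation
recs = topk_retrieve(user, items, k=10)
```

Because the cost is one matrix product plus a partial sort rather than K rounds of token-by-token decoding, this is where the reported inference speedup comes from.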
πŸ”Ž Similar Papers
2024-05-12International Conference on Information and Knowledge ManagementCitations: 60
Authors
Haohao Qu (Department of Computing, The Hong Kong Polytechnic University)
Wenqi Fan (Department of Computing (COMP) and Department of Management and Marketing (MM), The Hong Kong Polytechnic University)
Zihuai Zhao (Department of Computing, The Hong Kong Polytechnic University)
Qing Li (Department of Computing, The Hong Kong Polytechnic University)