Learnable Item Tokenization for Generative Recommendation

📅 2024-05-12
🏛️ International Conference on Information and Knowledge Management
📈 Citations: 60
✨ Influential: 7
🤖 AI Summary
This work addresses three key challenges in LLM-based generative recommendation: coarse-grained semantic representation of items, the absence of collaborative signals, and biased codebook allocation when mapping items into the language space. To this end, the authors propose LETTER, the first end-to-end learnable item tokenizer for generative recommendation. LETTER jointly models hierarchical semantics, user-item collaborative relationships, and codebook diversity within a residual-quantized variational autoencoder (RQ-VAE) framework. It incorporates a contrastive alignment loss to bridge the gap between semantic and collaborative embeddings, a diversity regularizer to balance codebook usage, and a ranking-guided generation loss to directly optimize recommendation performance. Extensive experiments on three public benchmarks demonstrate that LETTER consistently outperforms state-of-the-art baselines, setting a new state of the art for LLM-based generative recommendation and pointing toward deeper integration of recommender systems and large language models.
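As a rough illustration of the RQ-VAE tokenization step described above, the sketch below greedily quantizes an item embedding level by level, emitting one code index per codebook. The codebook sizes, embedding dimension, and random data are placeholder assumptions for illustration, not the paper's settings.

```python
import numpy as np

def residual_quantize(z, codebooks):
    """Greedy residual quantization: at each level, pick the codebook
    vector nearest to the current residual, record its index, and
    subtract it before moving to the next level."""
    residual = np.asarray(z, dtype=float).copy()
    codes = []
    for book in codebooks:                        # one codebook per level
        dists = np.linalg.norm(book - residual, axis=1)
        idx = int(np.argmin(dists))
        codes.append(idx)
        residual = residual - book[idx]
    return codes, residual

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(8, 4)) for _ in range(3)]  # 3 levels, 8 codes each (assumed)
item_embedding = rng.normal(size=4)               # stand-in semantic embedding
codes, residual = residual_quantize(item_embedding, codebooks)
# `codes` is the item's hierarchical token sequence fed to the LLM
```

In the full method the codebooks are learned jointly with the encoder rather than fixed random matrices as here.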

📝 Abstract
Utilizing powerful Large Language Models (LLMs) for generative recommendation has attracted much attention. Nevertheless, a crucial challenge is transforming recommendation data into the language space of LLMs through effective item tokenization. Current approaches, such as ID, textual, and codebook-based identifiers, exhibit shortcomings in encoding semantic information, incorporating collaborative signals, or handling code assignment bias. To address these limitations, we propose LETTER (a LEarnable Tokenizer for generaTivE Recommendation), which integrates hierarchical semantics, collaborative signals, and code assignment diversity to satisfy the essential requirements of identifiers. LETTER incorporates Residual Quantized VAE for semantic regularization, a contrastive alignment loss for collaborative regularization, and a diversity loss to mitigate code assignment bias. We instantiate LETTER on two models and propose a ranking-guided generation loss to theoretically augment their ranking ability. Experiments on three datasets validate the superiority of LETTER, advancing the state-of-the-art in the field of LLM-based generative recommendation.
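The contrastive alignment loss mentioned in the abstract can be illustrated with a generic InfoNCE-style objective between semantic and collaborative embeddings of the same items. The function name, temperature value, and exact loss form below are assumptions; the paper's formulation may differ in detail.

```python
import numpy as np

def alignment_loss(sem, collab, tau=0.1):
    """InfoNCE-style loss: pull together the semantic and collaborative
    embeddings of the same item (diagonal pairs), push apart embeddings
    of different items (off-diagonal pairs)."""
    sem = sem / np.linalg.norm(sem, axis=1, keepdims=True)
    collab = collab / np.linalg.norm(collab, axis=1, keepdims=True)
    logits = sem @ collab.T / tau                 # pairwise cosine / temperature
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))     # diagonal = matching pairs

rng = np.random.default_rng(1)
sem = rng.normal(size=(16, 8))                    # toy batch of 16 items
aligned = alignment_loss(sem, sem)                # identical views: low loss
shuffled = alignment_loss(sem, sem[::-1].copy())  # mismatched views: high loss
```

Minimizing such a loss injects collaborative structure into the semantic space before quantization, which is the role the abstract assigns to collaborative regularization.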
Problem

Research questions and friction points this paper is trying to address.

Transform recommendation data into LLM language space
Address shortcomings in semantic and collaborative encoding
Mitigate code assignment bias in item tokenization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learnable tokenization integrating hierarchical semantics and collaborative signals
Residual Quantized VAE with contrastive alignment for semantic regularization
Diversity loss mitigates code assignment bias in generative recommendation
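The diversity idea in the last bullet can be sketched as a penalty on skewed code usage. Here we use a KL-divergence-to-uniform stand-in over the empirical code-assignment distribution, which is an illustrative assumption rather than LETTER's exact regularizer.

```python
import numpy as np

def diversity_penalty(code_ids, num_codes):
    """KL divergence between the empirical code-usage distribution and
    the uniform one: zero when every code is used equally, and as large
    as log(num_codes) when all items collapse onto a single code."""
    counts = np.bincount(code_ids, minlength=num_codes).astype(float)
    p = counts / counts.sum()
    nz = p > 0                                    # skip unused codes (0 * log 0 = 0)
    return float(np.sum(p[nz] * np.log(p[nz] * num_codes)))

balanced = diversity_penalty(np.arange(8), 8)            # each code used once
collapsed = diversity_penalty(np.zeros(8, dtype=int), 8)  # one code used 8 times
```

A balanced assignment yields zero penalty while a collapsed one yields log 8, so adding this term to the training objective discourages the biased code allocation the bullet refers to.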
Wenjie Wang
National University of Singapore
Honghui Bao
National University of Singapore
Xinyu Lin
National University of Singapore
Recommendation
Jizhi Zhang
University of Science and Technology of China
Recommendation · Trustworthy AI · Large Personalized Model
Yongqi Li
The Hong Kong Polytechnic University
Fuli Feng
University of Science and Technology of China
See-Kiong Ng
School of Computing and Institute of Data Science, National University of Singapore
artificial intelligence · natural language processing · data mining · smart cities · bioinformatics
Tat-Seng Chua
National University of Singapore