🤖 AI Summary
This work addresses the misalignment between semantic ID-based recommendation methods and generative objectives, which leads to weakly coupled collaborative prediction and inefficient sequence modeling. To overcome these limitations, the authors propose ReSID, a framework featuring recommendation-native encoding and quantization mechanisms that enable efficient generative recommendation without reliance on large language models. The core innovations are Field-Aware Masked Auto-Encoding (FAMAE), which strengthens predictive representation learning, and Globally Aligned Orthogonal Quantization (GAOQ), which produces low-uncertainty, compact semantic ID sequences. Evaluated across ten benchmark datasets, ReSID consistently outperforms strong baselines by over 10% on average and reduces tokenization costs by up to 122×.
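The paper's FAMAE details are not given here; as a rough intuition, masked auto-encoding over structured item fields means hiding one field's embedding and scoring how well the model reconstructs it from the remaining fields. The following is a minimal sketch under that reading, with toy linear maps standing in for learned networks (the field layout, dimensions, and names such as `masked_field_loss` are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

num_fields, field_dim = 4, 8                       # e.g. category, brand, price, title
fields = rng.normal(size=(num_fields, field_dim))  # per-field item embeddings

def masked_field_loss(fields, masked_idx, encode, decode):
    """Mask one field, encode the rest into a shared item representation,
    and score reconstruction of the masked field only (mean squared error)."""
    masked = fields.copy()
    masked[masked_idx] = 0.0           # mask the chosen field
    z = encode(masked)                 # shared item representation
    recon = decode(z)                  # predict all fields back
    return float(np.mean((recon[masked_idx] - fields[masked_idx]) ** 2))

# Toy linear encoder/decoder standing in for learned networks.
W_enc = rng.normal(size=(num_fields * field_dim, 16)) * 0.1
W_dec = rng.normal(size=(16, num_fields * field_dim)) * 0.1
encode = lambda f: f.reshape(-1) @ W_enc
decode = lambda z: (z @ W_dec).reshape(num_fields, field_dim)

loss = masked_field_loss(fields, masked_idx=1, encode=encode, decode=decode)
print(loss)  # non-negative reconstruction error on the masked field
```

Training would minimize this loss over many items and random field masks, pushing the shared representation to be predictive of every field.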
📝 Abstract
Semantic ID (SID)-based recommendation is a promising paradigm for scaling sequential recommender systems, but existing methods largely follow a semantic-centric pipeline: item embeddings are learned from foundation models and discretized using generic quantization schemes. This design is misaligned with generative recommendation objectives: semantic embeddings are weakly coupled with collaborative prediction, and generic quantization is inefficient at reducing sequential uncertainty for autoregressive modeling. To address these issues, we propose ReSID, a recommendation-native, principled SID framework that rethinks representation learning and quantization from the perspective of information preservation and sequential predictability, without relying on LLMs. ReSID consists of two components: (i) Field-Aware Masked Auto-Encoding (FAMAE), which learns predictive-sufficient item representations from structured features, and (ii) Globally Aligned Orthogonal Quantization (GAOQ), which produces compact and predictable SID sequences by jointly reducing semantic ambiguity and prefix-conditional uncertainty. Theoretical analysis and extensive experiments across ten datasets show the effectiveness of ReSID. ReSID consistently outperforms strong sequential and SID-based generative baselines by an average of over 10%, while reducing tokenization cost by up to 122×. Code is available at https://github.com/FuCongResearchSquad/ReSID.
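For readers unfamiliar with the SID pipeline the abstract critiques, the "generic quantization schemes" it refers to typically assign each item a short discrete code by repeated nearest-codeword lookup (residual quantization). The sketch below illustrates that baseline mechanism only, not GAOQ itself; the codebooks, sizes, and the helper `to_semantic_id` are made-up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

dim, codebook_size, num_levels = 8, 16, 3
# One codebook of candidate codewords per quantization level.
codebooks = rng.normal(size=(num_levels, codebook_size, dim))

def to_semantic_id(embedding, codebooks):
    """Residual quantization: at each level, pick the nearest codeword,
    subtract it, and quantize the remaining residual at the next level.
    The sequence of chosen indices is the item's semantic ID."""
    residual = embedding.copy()
    sid = []
    for level_codebook in codebooks:
        dists = np.linalg.norm(level_codebook - residual, axis=1)
        idx = int(np.argmin(dists))
        sid.append(idx)
        residual = residual - level_codebook[idx]
    return sid

item_embedding = rng.normal(size=dim)
sid = to_semantic_id(item_embedding, codebooks)
print(sid)  # a 3-token discrete code for the item
```

A generative recommender then autoregresses over such token sequences; the abstract's point is that when these codes are produced without regard to sequential predictability, the resulting token streams are needlessly hard to model, which is what GAOQ is designed to fix.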