UniGRec: Unified Generative Recommendation with Soft Identifiers for End-to-End Optimization

📅 2026-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key challenges in generative recommendation—namely, the decoupling between tokenizers and recommenders, train-inference inconsistency, identifier collapse, and insufficient collaborative signals—by proposing an end-to-end jointly optimized framework. The approach employs learnable soft identifiers for unified modeling and integrates three core mechanisms: annealed inference alignment to mitigate train-inference discrepancy, codebook uniformity regularization to prevent identifier collapse, and dual collaborative distillation to alleviate semantic overfitting. Extensive experiments on multiple real-world datasets demonstrate that the proposed method significantly outperforms current state-of-the-art models, confirming its effectiveness and robustness.

📝 Abstract
Generative recommendation has recently emerged as a transformative paradigm that directly generates target items, surpassing traditional cascaded approaches. It typically involves two components: a tokenizer that learns item identifiers and a recommender trained on them. Existing methods often decouple tokenization from recommendation or rely on asynchronous alternating optimization, limiting full end-to-end alignment. To address this, we unify the tokenizer and recommender under the ultimate recommendation objective via differentiable soft item identifiers, enabling joint end-to-end training. However, this introduces three challenges: training-inference discrepancy due to soft-to-hard mismatch, item identifier collapse from codeword usage imbalance, and collaborative signal deficiency due to an overemphasis on fine-grained token-level semantics. To tackle these challenges, we propose UniGRec, a unified generative recommendation framework that addresses them from three perspectives. UniGRec employs Annealed Inference Alignment during tokenization to smoothly bridge soft training and hard inference, a Codeword Uniformity Regularization to prevent identifier collapse and encourage codebook diversity, and a Dual Collaborative Distillation mechanism that distills collaborative priors from a lightweight teacher model to jointly guide both the tokenizer and the recommender. Extensive experiments on real-world datasets demonstrate that UniGRec consistently outperforms state-of-the-art baseline methods. Our codes are available at https://github.com/Jialei-03/UniGRec.
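To make the three mechanisms more concrete, here is a minimal sketch of how soft-to-hard identifier selection with temperature annealing and a codeword-uniformity penalty could look. This is an illustration under assumptions, not the paper's implementation: the function names, the linear annealing schedule, and the KL-to-uniform regularizer are all hypothetical stand-ins for the (unspecified) details of Annealed Inference Alignment and Codeword Uniformity Regularization.

```python
import numpy as np

def soft_codeword_select(logits, temperature):
    """Soft identifier: temperature-scaled softmax over codebook logits.
    As temperature -> 0 this approaches a hard one-hot (argmax) choice,
    which is what inference uses."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)

def annealed_temperature(step, total_steps, t_start=1.0, t_end=0.1):
    """Hypothetical linear schedule: anneal the softmax temperature over
    training so soft (differentiable) selection gradually approaches the
    hard selection used at inference."""
    frac = min(step / total_steps, 1.0)
    return t_start + frac * (t_end - t_start)

def uniformity_regularizer(assignments, eps=1e-9):
    """Penalize imbalanced codeword usage via KL(mean usage || uniform).
    Near zero when every codeword is used equally; grows as usage
    collapses onto a few codewords (identifier collapse)."""
    usage = assignments.mean(axis=0)  # average soft usage per codeword
    k = usage.shape[-1]
    return float(np.sum(usage * (np.log(usage + eps) - np.log(1.0 / k))))
```

For example, with `logits = np.array([[2.0, 1.0, 0.5, 0.1]])`, the distribution at the annealed end temperature is far more peaked (closer to the hard one-hot used at inference) than at the starting temperature, while `uniformity_regularizer` is near zero for balanced assignments and large when all items map to one codeword.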
Problem

Research questions and friction points this paper is trying to address.

generative recommendation
end-to-end optimization
soft identifiers
tokenizer-recommender alignment
collaborative signal deficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative Recommendation
Soft Item Identifiers
End-to-End Optimization
Codeword Uniformity
Collaborative Distillation