SODA: Semantic-Oriented Distributional Alignment for Generative Recommendation

📅 2026-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes the first generative recommendation framework based on distribution-level supervision, addressing the limitations of existing methods that rely on discrete token-level supervision—namely, information loss and the inability to jointly optimize the tokenizer and recommendation model. By constructing soft probability distributions over multi-layer codebooks and aligning them with semantically rich targets via negative KL divergence, the framework enables end-to-end differentiable training. It introduces a semantic-aware distribution alignment mechanism, seamlessly integrated with Bayesian Personalized Ranking (BPR) contrastive learning, yielding a plug-and-play, highly generalizable supervision paradigm. Extensive experiments across multiple real-world datasets demonstrate consistent and significant performance improvements over diverse backbone models, validating the effectiveness and broad applicability of the proposed approach.

📝 Abstract
Generative recommendation has emerged as a scalable alternative to traditional retrieve-and-rank pipelines by operating in a compact token space. However, existing methods rely mainly on discrete code-level supervision, which leads to information loss and limits joint optimization between the tokenizer and the generative recommender. In this work, we propose a distribution-level supervision paradigm that leverages probability distributions over multi-layer codebooks as soft, information-rich representations. Building on this idea, we introduce Semantic-Oriented Distributional Alignment (SODA), a plug-and-play contrastive supervision framework based on Bayesian Personalized Ranking, which aligns semantically rich distributions via negative KL divergence while enabling end-to-end differentiable training. Extensive experiments on multiple real-world datasets demonstrate that SODA consistently improves the performance of various generative recommender backbones, validating its effectiveness and generality. Code will be available upon acceptance.
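The paper's code is not yet released, so the exact formulation is unknown. As a rough sketch of the ideas the abstract describes, one can combine a negative-KL similarity over per-layer codebook distributions with a BPR-style pairwise objective. All shapes and names below (`num_layers`, `codebook_size`, the specific KL direction) are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def neg_kl_score(p, q, eps=1e-12):
    """Similarity as negative KL(q || p), summed over codebook layers.

    p, q: arrays of shape (num_layers, codebook_size), rows summing to 1.
    Higher (closer to 0) means the predicted distribution p better
    matches the semantic target distribution q.
    """
    return -np.sum(q * (np.log(q + eps) - np.log(p + eps)))

def bpr_distribution_loss(pred_logits, pos_target, neg_target):
    """BPR-style contrastive loss over distribution-level scores.

    pred_logits: model logits over each codebook layer,
                 shape (num_layers, codebook_size).
    pos_target / neg_target: soft target distributions for the
                 positive and a sampled negative item.
    """
    p = softmax(pred_logits)
    s_pos = neg_kl_score(p, pos_target)
    s_neg = neg_kl_score(p, neg_target)
    # BPR: maximize the margin between positive and negative scores.
    return -np.log(1.0 / (1.0 + np.exp(-(s_pos - s_neg))))
```

Because both the softmax and the KL term are differentiable in the logits, a loss of this shape would let gradients flow from the recommendation objective back into the tokenizer, which is the end-to-end property the abstract emphasizes.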
Problem

Research questions and friction points this paper is trying to address.

generative recommendation
discrete code-level supervision
information loss
joint optimization
tokenizer
Innovation

Methods, ideas, or system contributions that make the work stand out.

distributional alignment
generative recommendation
semantic supervision
contrastive learning
end-to-end training