Tokenize Once, Recommend Anywhere: Unified Item Tokenization for Multi-domain LLM-based Recommendation

πŸ“… 2025-11-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Problem: Existing LLM-based recommender systems rely on domain-specific tokenization, which generalizes poorly and makes it hard to unify semantic representations across multiple domains. Method: We propose UniTok, a unified tokenization framework that enables cross-domain tokenization without retraining. UniTok constructs a shared latent space via a shared encoder and multi-codebook discretization; incorporates a Mixture-of-Experts (MoE) architecture to jointly model domain-invariant patterns and domain-specific characteristics; and introduces a mutual information calibration mechanism to mitigate semantic imbalance across domains. Contribution/Results: Evaluated on multiple real-world datasets, UniTok achieves up to 51.89% improvement in recommendation performance. It significantly enhances model generalizability and cross-domain robustness, establishing a scalable, theoretically interpretable paradigm for unified tokenization in LLM-driven universal recommendation.

πŸ“ Abstract
Large language model (LLM)-based recommender systems have achieved high-quality performance by bridging the discrepancy between the item space and the language space through item tokenization. However, existing item tokenization methods typically require training separate models for each item domain, limiting generalization. Moreover, the diverse distributions and semantics across item domains make it difficult to construct a unified tokenization that preserves domain-specific information. To address these challenges, we propose UniTok, a Unified item Tokenization framework that integrates our own mixture-of-experts (MoE) architecture with a series of codebooks to convert items into discrete tokens, enabling scalable tokenization while preserving semantic information across multiple item domains. Specifically, items from different domains are first projected into a unified latent space through a shared encoder. They are then routed to domain-specific experts to capture the unique semantics, while a shared expert, which is always active, encodes common knowledge transferable across domains. Additionally, to mitigate semantic imbalance across domains, we present a mutual information calibration mechanism, which guides the model towards retaining similar levels of semantic information for each domain. Comprehensive experiments on wide-ranging real-world datasets demonstrate that the proposed UniTok framework is (a) highly effective: achieving up to 51.89% improvements over strong benchmarks; (b) theoretically sound: showing the analytical validity of our architectural design and optimization; and (c) highly generalizable: demonstrating robust performance across diverse domains without requiring per-domain retraining, a capability not supported by existing baselines.
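The tokenization pipeline the abstract describes (shared encoder into a unified latent space, an always-active shared expert plus a routed domain-specific expert, then codebook discretization) can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the layer shapes, the additive combination of experts, the single codebook level, and all variable names are our assumptions.

```python
# Toy sketch of UniTok-style unified item tokenization.
# All dimensions, the additive expert combination, and the single
# codebook level are illustrative assumptions, not the paper's design.
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_LAT, N_DOMAINS, CODEBOOK = 16, 8, 3, 32

W_enc = rng.normal(size=(D_IN, D_LAT))                 # shared encoder
W_shared = rng.normal(size=(D_LAT, D_LAT))             # always-active shared expert
W_domain = rng.normal(size=(N_DOMAINS, D_LAT, D_LAT))  # one expert per domain
codebook = rng.normal(size=(CODEBOOK, D_LAT))          # discrete token embeddings

def tokenize(item_feats, domain_id):
    """Map an item's feature vector to a discrete token id."""
    z = item_feats @ W_enc                        # project into the shared latent space
    h = z @ W_shared + z @ W_domain[domain_id]    # shared + routed domain-specific expert
    dists = np.linalg.norm(codebook - h, axis=1)  # nearest-neighbour quantization
    return int(np.argmin(dists))                  # token id usable by an LLM vocabulary

item = rng.normal(size=D_IN)
token = tokenize(item, domain_id=1)
print(token)  # an integer in [0, CODEBOOK)
```

Because the encoder and codebook are shared, items from any domain land in one token space; only the routed expert changes per domain, which is what lets the framework add domains without retraining the whole tokenizer.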
Problem

Research questions and friction points this paper is trying to address.

Unified tokenization for multi-domain LLM recommendations
Preserving domain-specific semantics in unified tokenization
Eliminating per-domain retraining in cross-domain recommendation systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified tokenization framework using mixture-of-experts architecture
Shared encoder projects items into unified latent space
Mutual information calibration balances semantic information across domains
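The calibration idea in the last bullet, nudging every domain to retain a similar amount of semantic information, can be illustrated with a simple proxy. The sketch below is our own loose analogy, not the paper's estimator: it uses per-domain token-usage entropy as a stand-in for mutual information and penalizes its variance across domains.

```python
# Hedged sketch of mutual-information calibration: estimate a per-domain
# information proxy and penalize imbalance across domains. The entropy
# proxy and the variance penalty are illustrative choices, not the
# paper's actual MI estimator or loss.
import numpy as np

def token_entropy(token_ids, codebook_size):
    """Entropy of a domain's token usage, a crude proxy for how much
    information its items retain after discretization."""
    counts = np.bincount(token_ids, minlength=codebook_size)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def calibration_loss(per_domain_tokens, codebook_size):
    """Penalize dispersion of per-domain information levels so no domain
    dominates, or collapses within, the shared token space."""
    ents = np.array([token_entropy(t, codebook_size) for t in per_domain_tokens])
    return float(np.var(ents))

tokens_a = np.array([0, 1, 2, 3, 0, 1, 2, 3])  # balanced codebook usage
tokens_b = np.array([0, 0, 0, 0, 0, 0, 0, 1])  # collapsed codebook usage
loss = calibration_loss([tokens_a, tokens_b], codebook_size=4)
print(loss > 0)  # imbalanced domains incur a positive penalty
```

Under this proxy, two equally informative domains incur zero penalty, while a domain whose tokens collapse onto a few codes raises the loss, matching the stated goal of similar semantic information levels per domain.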
πŸ”Ž Similar Papers
No similar papers found.