Drift-Aware Continual Tokenization for Generative Recommendation

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses identifier conflicts and collaborative drift in generative recommender systems, which arise as new items are continuously introduced and user behaviors evolve, and which would otherwise necessitate frequent, costly, and unstable model retraining. To mitigate these issues, the authors propose DACT, a framework that enables collaborative-aware continual tokenization through a two-stage strategy. First, a Collaborative Drift Identification Module (CDIM) distinguishes drifted items from stable ones, enabling differentiated fine-tuning. Second, a hierarchical token reallocation mechanism updates identifier sequences via a loose-to-tight encoding strategy. Combining a learnable tokenizer with an autoregressive generative model, DACT consistently outperforms baseline methods across three real-world datasets, adapting to dynamic collaborative relationships while minimally interfering with previously acquired knowledge.
📝 Abstract
Generative recommendation commonly adopts a two-stage pipeline in which a learnable tokenizer maps items to discrete token sequences (i.e., identifiers) and an autoregressive generative recommender model (GRM) performs prediction based on these identifiers. Recent tokenizers further incorporate collaborative signals so that items with similar user-behavior patterns receive similar codes, substantially improving recommendation quality. However, real-world environments evolve continuously: new items cause identifier collisions and shifts, while new interactions induce collaborative drift in existing items (e.g., changing co-occurrence patterns and popularity). Fully retraining both tokenizer and GRM is often prohibitively expensive, yet naively fine-tuning the tokenizer can alter token sequences for the majority of existing items, undermining the GRM's learned token-embedding alignment. To balance plasticity and stability for collaborative tokenizers, we propose DACT, a Drift-Aware Continual Tokenization framework with two stages: (i) tokenizer fine-tuning, augmented with a jointly trained Collaborative Drift Identification Module (CDIM) that outputs item-level drift confidence and enables differentiated optimization for drifting and stationary items; and (ii) hierarchical code reassignment using a relaxed-to-strict strategy to update token sequences while limiting unnecessary changes. Experiments on three real-world datasets with two representative GRMs show that DACT consistently achieves better performance than baselines, demonstrating effective adaptation to collaborative evolution with reduced disruption to prior knowledge. Our implementation is publicly available at https://github.com/HomesAmaranta/DACT for reproducibility.
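To make the two-stage idea concrete, here is a minimal, hypothetical sketch of drift-gated code reassignment. The abstract only describes the mechanism at a high level, so the cosine-based drift score stands in for the paper's learned CDIM confidence, and the per-level threshold schedule (strict for coarse token levels, relaxed for fine ones) is an illustrative reading of the "relaxed-to-strict" strategy; the `+1` re-quantization is a placeholder, not the actual tokenizer update.

```python
import numpy as np

def drift_confidence(old_emb, new_emb):
    """Drift score in [0, 1] from cosine distance between an item's old and
    new collaborative embeddings (a stand-in for the learned CDIM output)."""
    cos = (old_emb * new_emb).sum(-1) / (
        np.linalg.norm(old_emb, axis=-1) * np.linalg.norm(new_emb, axis=-1) + 1e-8
    )
    return (1.0 - cos) / 2.0

def reassign_codes(codes, drift, thresholds=(0.8, 0.5, 0.2)):
    """Hypothetical relaxed-to-strict reassignment: coarse token levels
    (early positions) change only for strongly drifted items, finer levels
    change more readily, limiting unnecessary identifier churn."""
    new_codes = codes.copy()
    for level, thr in enumerate(thresholds):
        mask = drift > thr  # stricter gate for coarser levels
        # placeholder re-quantization at this level for drifted items only
        new_codes[mask, level] = (codes[mask, level] + 1) % 256
    return new_codes

# Two items with 3-level identifiers; item 1's embedding has fully reversed.
codes = np.array([[1, 2, 3], [4, 5, 6]])
old = np.array([[1.0, 0.0], [1.0, 0.0]])
new = np.array([[1.0, 0.0], [-1.0, 0.0]])
drift = drift_confidence(old, new)       # ~[0.0, 1.0]
updated = reassign_codes(codes, drift)   # item 0 untouched, item 1 fully reassigned
```

The point of the gate is the stability/plasticity trade-off from the abstract: stationary items keep their token sequences (so the GRM's token-embedding alignment is preserved), while drifted items get new codes at the levels where their collaborative neighborhood actually moved.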
Problem

Research questions and friction points this paper is trying to address.

continual learning
tokenization drift
generative recommendation
collaborative drift
identifier collision
Innovation

Methods, ideas, or system contributions that make the work stand out.

continual tokenization
collaborative drift
generative recommendation
drift-aware learning
code reassignment