AI Summary
This work addresses the severe performance degradation of generative recommender models in cold-start scenarios, where existing retraining approaches are hindered by sparse user feedback, high computational costs, and update latency. The authors introduce GenRecEdit, the first training-free model editing framework tailored for recommendation systems. By modeling the relationship between sequential context and next-token generation, GenRecEdit employs an iterative token-level editing strategy coupled with a one-to-one triggering mechanism to resolve two key challenges inherent in generative recommenders: the lack of subject-object binding and the instability of multi-token representations. Extensive experiments demonstrate that GenRecEdit significantly improves recommendation accuracy for cold-start items across multiple datasets while preserving the original model's performance on established items. It achieves these gains in only about 9.5% of the time required for full retraining, substantially improving model update efficiency.
Abstract
Generative recommendation (GR) has shown strong potential for sequential recommendation in an end-to-end generation paradigm. However, existing GR models suffer from severe cold-start collapse: their recommendation accuracy on cold-start items can drop to near zero. Current solutions typically rely on retraining with cold-start interactions, which is hindered by sparse feedback, high computational cost, and delayed updates, limiting practical utility in rapidly evolving recommendation catalogs. Inspired by model editing in NLP, which enables training-free knowledge injection into large language models, we explore how to bring this paradigm to generative recommendation. This, however, faces two key challenges: GR lacks the explicit subject-object binding common in natural language, making targeted edits difficult; and GR does not exhibit stable token co-occurrence patterns, making the injection of multi-token item representations unreliable. To address these challenges, we propose GenRecEdit, a model editing framework tailored for generative recommendation. GenRecEdit explicitly models the relationship between the full sequence context and next-token generation, adopts iterative token-level editing to inject multi-token item representations, and introduces a one-to-one trigger mechanism to reduce interference among multiple edits during inference. Extensive experiments on multiple datasets show that GenRecEdit substantially improves recommendation performance on cold-start items while preserving the model's original recommendation quality. Moreover, it achieves these gains using only about 9.5% of the training time required for retraining, enabling more efficient and frequent model updates.
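The abstract describes two mechanisms: iterative token-level editing, which injects a multi-token item representation one token at a time while re-encoding the growing context, and a one-to-one trigger mechanism that keys each edit to a single context so edits do not interfere. The sketch below illustrates that iterative loop on a toy linear model, using a ROME-style closed-form rank-one update as a stand-in for the paper's actual edit rule; the `encode_context` function, token names, and the rank-one formula are illustrative assumptions, not GenRecEdit's real implementation.

```python
import numpy as np

def rank_one_edit(W, key, value, lam=1e-3):
    """Hypothetical closed-form edit (ROME-style stand-in): nudge W so
    that W @ key ~= value, correcting only along the key direction."""
    residual = value - W @ key
    return W + np.outer(residual, key) / (key @ key + lam)

def edit_item_tokens(W, encode_context, token_embeddings, item_tokens):
    """Iterative token-level editing: inject a multi-token item ID one
    token at a time. Each step binds the full-sequence context key to the
    next target token (a one-to-one trigger), then extends the context."""
    context = []
    for tok in item_tokens:
        key = encode_context(context)        # full-sequence context key
        value = token_embeddings[tok]        # target next-token value
        W = rank_one_edit(W, key, value)     # training-free weight edit
        context.append(tok)                  # extend context, repeat
    return W

# Toy setup (all names hypothetical): a cold-start item represented by
# three "semantic ID" tokens, random context encodings, zero-init weights.
rng = np.random.default_rng(0)
d = 8
item_tokens = ["<i101_a>", "<i101_b>", "<i101_c>"]
token_embeddings = {t: rng.normal(size=d) for t in item_tokens}

_ctx_cache = {}
def encode_context(context):
    """Deterministic stand-in encoder: one fixed key per unique context."""
    key = tuple(context)
    if key not in _ctx_cache:
        _ctx_cache[key] = rng.normal(size=d)
    return _ctx_cache[key]

W = np.zeros((d, d))
W = edit_item_tokens(W, encode_context, token_embeddings, item_tokens)
```

After editing, the weight matrix maps each prefix context to the next token of the new item, e.g. `W @ encode_context(("<i101_a>", "<i101_b>"))` lands close to the embedding of `<i101_c>` without any gradient-based retraining.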