🤖 AI Summary
To address the high computational overhead and severe parameter forgetting that large language model (LLM)-based recommender systems face as user preferences evolve, this paper proposes EvoRec, an efficient incremental update framework. Its core innovation is the “Locate–Forget–Update” paradigm: a difference-aware mechanism precisely identifies the LoRA submodules most relevant to preference shifts, so that only ~30% of adapter parameters are updated while interest modeling is jointly optimized for both active and inactive users. This eliminates full retraining, substantially reducing computational cost while effectively mitigating catastrophic forgetting. Extensive experiments on two real-world datasets show that EvoRec significantly outperforms existing baselines in both update efficiency and recommendation accuracy.
📝 Abstract
Large Language Models (LLMs) have shown exceptional performance in sequential recommendation, and LLM-based recommender systems (LLMRec) are being adopted increasingly widely on e-commerce platforms. Despite this impressive performance, the constant high volume of new user-item interactions makes it difficult for such systems to adapt to the evolution of user preferences over time. The challenge arises from the large number of parameters in LLMs, which makes traditional evolution methods (i.e., re-training or fine-tuning) impractical. Specifically, re-training on all interactions incurs prohibitively high computational costs, while fine-tuning on only new interactions leads to preference forgetting among inactive users, ultimately compromising overall performance. To tackle this problem, we propose EvoRec, an efficient Locate-Forget-Update framework designed for LLM-based recommender systems to model the evolution of user preferences. EvoRec identifies a small set of parameters associated with preference changes and updates them precisely, saving computational resources while maintaining strong recommendation performance. Notably, the modified parameters account for only 30% of the LoRA adapter parameters, and no additional parameters are introduced. Extensive experiments on two real-world datasets demonstrate that, compared with existing methods, EvoRec not only efficiently evolves LLMRec to adapt to the preferences of active users, but also prevents the interests of inactive users from being disturbed during evolution.
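The "Locate" step described above can be sketched as a drift-ranking over LoRA modules. The scoring rule (Frobenius norm of the weight difference after a cheap probe pass on new interactions), the module names, and the 30% budget below are all illustrative assumptions; the paper's actual difference-aware mechanism may differ in detail.

```python
# Hypothetical sketch of the "Locate" step: score each LoRA module by how
# much a brief probe update on new interactions changes its weights, then
# keep only the top ~30% of modules trainable and freeze the rest.
import numpy as np

def locate_modules(old_lora, probe_lora, budget=0.3):
    """Rank LoRA modules by parameter drift and select a trainable subset.

    old_lora / probe_lora: dicts mapping module name -> weight matrix
    (the adapter before and after a cheap probe pass on new interactions).
    Returns the set of module names whose drift scores fall in the top
    `budget` fraction; all other modules would stay frozen.
    """
    scores = {
        name: np.linalg.norm(probe_lora[name] - old_lora[name])
        for name in old_lora
    }
    k = max(1, int(round(budget * len(scores))))
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[:k])

# Toy usage: 4 modules, one of which drifted far more than the others,
# mimicking a localized preference shift among active users.
rng = np.random.default_rng(0)
old = {f"layer{i}.lora_A": rng.normal(size=(4, 4)) for i in range(4)}
probe = {n: w + 0.01 * rng.normal(size=w.shape) for n, w in old.items()}
probe["layer2.lora_A"] = old["layer2.lora_A"] + 1.0  # large shift
selected = locate_modules(old, probe, budget=0.3)
print(selected)  # → {'layer2.lora_A'}
```

Updating only the selected modules is what keeps the parameter footprint at roughly 30% of the adapter, while frozen modules retain the preferences of inactive users.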