An Efficient LLM-based Evolutional Recommendation with Locate-Forget-Update Paradigm

📅 2025-11-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational overhead and severe parameter forgetting in large language model (LLM)-based recommender systems under dynamic user preference evolution, this paper proposes EvoRec, an efficient incremental update framework. Its core innovation is the “Locate–Forget–Update” paradigm: leveraging a difference-aware mechanism to precisely identify LoRA submodules most relevant to preference shifts, thereby updating only ~30% of adapter parameters while jointly optimizing interest modeling for both active and inactive users. This approach eliminates full retraining, substantially reducing computational cost while effectively mitigating catastrophic forgetting. Extensive experiments on two real-world datasets demonstrate that EvoRec significantly outperforms existing baselines in both evolutionary efficiency and recommendation accuracy.

📝 Abstract
Nowadays, Large Language Models (LLMs) have shown exceptional performance in sequential recommendation, and LLM-based recommender systems (LLMRec) are increasingly widespread on e-commerce platforms. Despite this impressive performance, the constant high volume of new user-item interactions makes it difficult to adapt to the evolution of user preferences over time, especially for LLM-based recommender systems. The challenge arises from the large number of parameters in LLMs, which makes traditional evolution methods (i.e., re-training or fine-tuning) impractical. Specifically, re-training with all interactions incurs prohibitively high computational costs, while fine-tuning with only new interactions leads to preference forgetting among inactive users, ultimately compromising overall performance. To tackle this problem, we propose EvoRec, an efficient Locate-Forget-Update framework designed for LLM-based recommender systems to model the evolution of user preferences. EvoRec identifies a small set of parameters associated with preference changes and updates them precisely, saving computational resources while maintaining strong recommendation performance. Notably, the modified parameters account for only 30% of LoRA adapter parameters, and no additional parameters are introduced. Extensive experiments on two real-world datasets demonstrate that, compared with existing methods, EvoRec not only efficiently evolves LLMRec to adapt to the preferences of active users, but also keeps the interests of inactive users from being disturbed during evolution.
Problem

Research questions and friction points this paper is trying to address.

Efficiently adapting LLM-based recommenders to evolving user preferences
Reducing computational costs of retraining large language models for recommendations
Preventing preference forgetting in inactive users during model updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Locate-Forget-Update framework for LLM evolution
Updates only 30% of LoRA adapter parameters
Identifies and updates parameters for preference changes
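The "Locate" step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `locate_drifted_modules`, the idea of scoring each LoRA module by the L2 distance its weights move under a short probe fine-tune on new interactions, and the flattened-list weight representation are all assumptions made for the sketch; the paper's actual difference-aware mechanism may differ.

```python
import math

def locate_drifted_modules(old_lora, probe_lora, keep_ratio=0.3):
    """Sketch of the 'Locate' step: score each LoRA module by how far a
    short probe fine-tune on the newest interactions moved its weights
    (L2 distance between flattened weight vectors), then return the top
    `keep_ratio` fraction of module names. Only these would be updated;
    the remaining adapter parameters stay frozen, which is how the
    ~30%-of-LoRA-parameters budget in the summary could be enforced."""
    scores = {
        name: math.sqrt(sum((p - o) ** 2
                            for p, o in zip(probe_lora[name], old_lora[name])))
        for name in old_lora
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    k = max(1, round(len(ranked) * keep_ratio))
    return set(ranked[:k])
```

A module whose probe weights barely differ from the old ones (an inactive user's preference region, in the paper's framing) is left untouched, which is what mitigates the forgetting described in the Problem section.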
Hao Liu
Hefei University of Technology, China
Le Wu
Hefei University of Technology
recommender systems, user modeling, explainability and fairness in recommendation
Min Hou
Hefei University of Technology
Han Wu
Hefei University of Technology, China
Kun Zhang
Hefei University of Technology, China
Xin Li
IFLYTEK Research, China
Si Wei
IFLYTEK Research, China