Continual Low-Rank Adapters for LLM-based Generative Recommender Systems

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) face persistent challenges in generative recommendation due to the continual evolution of user preferences; conventional LoRA-based continual learning methods prioritize preserving historical performance but overlook the core objective—predicting future interests—where outdated preferences may degrade recommendation quality. Method: We propose a LoRA-based continual learning framework grounded in proximal regularization. By anchoring to the most recent adapter state, it imposes data-aware directional constraints within the low-rank subspace, enabling adaptive acquisition of new knowledge while selectively retaining relevant historical knowledge. The approach requires only a single lightweight adapter for efficient incremental training. Results: Extensive experiments across multiple recommendation benchmarks demonstrate that our method significantly outperforms mainstream LoRA baselines, achieving substantial improvements in modeling recent user behavior and recommendation accuracy.

📝 Abstract
While large language models (LLMs) achieve strong performance in recommendation, they face challenges in continual learning as users, items, and user preferences evolve over time. Existing LoRA-based continual methods primarily focus on preserving performance on previous tasks, but this overlooks the unique nature of recommendation: the goal is not to predict past preferences, and outdated preferences can even harm performance when current interests shift significantly. To address this, we propose PESO (Proximally rEgularized Single evolving lOra), a continual adaptation method for LoRA in recommendation. PESO introduces a proximal regularizer that anchors the current adapter to its most recent frozen state, enabling the model to flexibly balance adaptation and preservation, and to better capture recent user behaviors. Theoretically, we show that this proximal design provides data-aware, direction-wise guidance in the LoRA subspace. Empirically, PESO consistently outperforms existing LoRA-based continual learning methods.
Problem

Research questions and friction points this paper is trying to address.

Addresses continual learning challenges in LLM-based recommender systems
Overcomes limitations of preserving outdated user preferences in recommendations
Enables flexible adaptation to evolving user behaviors over time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proximally regularized single evolving LoRA adapter
Balances adaptation and preservation in recommendations
Anchors current adapter to recent frozen state
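The anchoring idea above can be sketched as a proximal penalty on the LoRA factors. This is a minimal illustration, not PESO's actual formulation: the paper describes a data-aware, direction-wise regularizer in the LoRA subspace, whereas the sketch below uses a plain squared Frobenius-norm drift term; the function name, the `lam` weight, and the anchor variables are all hypothetical.

```python
import numpy as np

def proximal_penalty(A, B, A_anchor, B_anchor, lam=0.1):
    """Hypothetical proximal regularizer: penalize drift of the current
    LoRA factors (A, B) from their most recent frozen state
    (A_anchor, B_anchor). A squared Frobenius norm is used here for
    illustration; PESO's data-aware, direction-wise design differs."""
    drift_A = np.linalg.norm(A - A_anchor) ** 2
    drift_B = np.linalg.norm(B - B_anchor) ** 2
    return lam * (drift_A + drift_B)

# Toy usage: a rank-2 adapter for a 4x4 weight matrix.
rng = np.random.default_rng(0)
A_old = rng.standard_normal((4, 2))   # frozen state from the last task
B_old = rng.standard_normal((2, 4))
A_new = A_old + 0.01 * rng.standard_normal((4, 2))  # small update step
penalty = proximal_penalty(A_new, B_old, A_old, B_old)
```

During incremental training, this term would be added to the recommendation task loss, so gradient steps trade off fitting recent interactions against drifting too far from the last adapter state.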