SPRInG: Continual LLM Personalization via Selective Parametric Adaptation and Retrieval-Interpolated Generation

📅 2026-01-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes SPRInG, a semi-parametric framework for continual personalization that addresses catastrophic forgetting, noise sensitivity, and personalization failure in large language models as user preferences continuously evolve. During training, a likelihood-based novelty score identifies genuine preference shifts and drives selective adaptation of a user-specific adapter, while hard-to-learn residuals are preserved in a replay buffer. At inference, a strict relevance gate filters retrieved historical interactions, which are then fused with parametric knowledge via logit interpolation, allowing the model to distinguish long-term preference changes from transient contextual cues. Evaluated on a long-form personalized generation benchmark, the method significantly outperforms existing baselines, demonstrating its effectiveness and robustness in dynamic interactive scenarios.

📝 Abstract
Personalizing Large Language Models typically relies on static retrieval or one-time adaptation, assuming user preferences remain invariant over time. However, real-world interactions are dynamic, where user interests continuously evolve, posing a challenge for models to adapt to preference drift without catastrophic forgetting. Standard continual learning approaches often struggle in this context, as they indiscriminately update on noisy interaction streams, failing to distinguish genuine preference shifts from transient contexts. To address this, we introduce SPRInG, a novel semi-parametric framework designed for effective continual personalization. During training, SPRInG employs drift-driven selective adaptation, which utilizes a likelihood-based scoring function to identify high-novelty interactions. This allows the model to selectively update the user-specific adapter on drift signals while preserving hard-to-learn residuals in a replay buffer. During inference, we apply strict relevance gating and fuse parametric knowledge with retrieved history via logit interpolation. Experiments on the long-form personalized generation benchmark demonstrate that SPRInG outperforms existing baselines, validating its robustness for real-world continual personalization.
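The inference-time mechanism the abstract describes (likelihood-based novelty scoring, relevance gating, and logit interpolation) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the gating threshold, and the fixed interpolation weight `lam` are all assumptions for the sake of the example.

```python
import numpy as np

def novelty_score(token_logprobs):
    # Likelihood-based novelty (assumed form): the lower the average
    # log-likelihood the model assigns to a new interaction, the more
    # novel it is, and the more likely it signals genuine preference drift.
    return -float(np.mean(token_logprobs))

def interpolate_logits(parametric_logits, retrieval_logits,
                       relevance, gate_threshold=0.5, lam=0.5):
    # Strict relevance gating (threshold is an illustrative choice):
    # if the retrieved history scores below the gate, fall back to the
    # parametric, adapter-augmented distribution alone.
    if relevance < gate_threshold:
        return parametric_logits
    # Logit interpolation: fuse parametric knowledge with the
    # distribution induced by retrieved historical interactions.
    return lam * parametric_logits + (1.0 - lam) * retrieval_logits
```

In a streaming setup, interactions whose novelty score exceeds some drift threshold would trigger an adapter update, while the remainder could be routed to the replay buffer; the details of that loop are not specified here.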
Problem

Research questions and friction points this paper is trying to address.

continual personalization
preference drift
catastrophic forgetting
large language models
dynamic user preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

continual personalization
selective parametric adaptation
preference drift
retrieval-interpolated generation
semi-parametric framework