AI Summary
Neural collaborative filtering (NCF) models suffer performance degradation in dynamic recommendation scenarios because the data distribution shifts over time, while existing incremental learning approaches struggle to adapt to sparse recommendation tasks. To address this, we propose MEGG, a model-agnostic experience replay framework. Its core innovation is GGscore, a gradient-based influence metric that quantifies each sample's importance, enabling efficient selection of high-value historical samples for replay and thereby mitigating catastrophic forgetting. MEGG imposes no architectural constraints and integrates seamlessly into diverse neural recommenders. Extensive experiments across three state-of-the-art NCF models and four benchmark datasets demonstrate that MEGG consistently outperforms existing SOTA methods, with significant gains in recommendation accuracy, training efficiency, generalization, and robustness to distribution shift.
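The summary does not give GGscore's exact definition. As a minimal sketch only, assuming an influence-style score computed as the alignment of each sample's per-sample loss gradient with the mean gradient (a common proxy in gradient-based sample selection; the scoring function and the logistic model here are illustrative assumptions, not the paper's method), selecting samples with maximally extreme scores for a replay buffer might look like:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_sample_gradients(w, X, y):
    # Per-sample gradient of the logistic loss: (sigma(w.x) - y) * x
    errs = sigmoid(X @ w) - y
    return errs[:, None] * X

def select_extreme(scores, k):
    # Keep the k/2 lowest- and k/2 highest-scoring samples
    # ("maximally extreme" on both ends of the score distribution).
    order = np.argsort(scores)
    return np.concatenate([order[:k // 2], order[-(k - k // 2):]])

# Toy historical data (stand-in for past user-item interactions).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
w_true = rng.normal(size=8)
y = (sigmoid(X @ w_true) > 0.5).astype(float)

w = np.zeros(8)  # current model parameters
grads = per_sample_gradients(w, X, y)
mean_grad = grads.mean(axis=0)

# Hypothetical influence-style score: how strongly each sample's
# gradient aligns with the aggregate update direction.
scores = grads @ mean_grad
replay_idx = select_extreme(scores, k=10)  # indices kept for replay
```

In an incremental-training loop, the samples at `replay_idx` would be mixed into each new data block so the model revisits its most influential past interactions.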
Abstract
Neural Collaborative Filtering models are widely used in recommender systems but are typically trained under static settings that assume a fixed data distribution. This limits their applicability in dynamic environments where user preferences evolve. Incremental learning offers a promising solution, yet conventional methods from computer vision or NLP face challenges in recommendation tasks due to data sparsity and distinct task paradigms. Existing approaches for neural recommenders remain limited and often lack generalizability. To address this, we propose MEGG (Replay Samples with Maximally Extreme GGscore), an experience-replay-based incremental learning framework. MEGG introduces GGscore, a novel metric that quantifies sample influence, enabling the selective replay of highly influential samples to mitigate catastrophic forgetting. Being model-agnostic, MEGG integrates seamlessly across architectures and frameworks. Experiments on three neural models and four benchmark datasets show superior performance over state-of-the-art baselines, with strong scalability, efficiency, and robustness. The implementation will be released publicly upon acceptance.