MEGG: Replay via Maximally Extreme GGscore in Incremental Learning for Neural Recommendation Models

πŸ“… 2025-09-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Neural collaborative filtering (NCF) models suffer from performance degradation in dynamic recommendation scenarios due to data distribution shift, while existing incremental learning approaches struggle to adapt to sparse recommendation tasks. To address this, we propose MEGG, a model-agnostic experience replay framework. Its core innovation is GGscore, a gradient-based influence metric that quantifies sample importance, enabling efficient selection of high-value historical samples for replay and thereby mitigating catastrophic forgetting. MEGG imposes no architectural constraints and can be seamlessly integrated into diverse neural recommenders. Extensive experiments across three state-of-the-art NCF models and four benchmark datasets demonstrate that MEGG consistently outperforms existing SOTA methods, delivering significant improvements in recommendation accuracy, training efficiency, generalization capability, and robustness to distribution shifts.

πŸ“ Abstract
Neural Collaborative Filtering models are widely used in recommender systems but are typically trained under static settings, assuming fixed data distributions. This limits their applicability in dynamic environments where user preferences evolve. Incremental learning offers a promising solution, yet conventional methods from computer vision or NLP face challenges in recommendation tasks due to data sparsity and distinct task paradigms. Existing approaches for neural recommenders remain limited and often lack generalizability. To address this, we propose MEGG (Replay Samples with Maximally Extreme GGscore), an experience-replay-based incremental learning framework. MEGG introduces GGscore, a novel metric that quantifies sample influence, enabling the selective replay of highly influential samples to mitigate catastrophic forgetting. Being model-agnostic, MEGG integrates seamlessly across architectures and frameworks. Experiments on three neural models and four benchmark datasets show superior performance over state-of-the-art baselines, with strong scalability, efficiency, and robustness. Implementation will be released publicly upon acceptance.
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in incremental neural recommendation models
Overcomes data sparsity challenges in dynamic recommendation environments
Proposes model-agnostic framework for replay-based incremental learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses GGscore metric to quantify sample influence
Selectively replays influential samples to prevent forgetting
Model-agnostic framework compatible with various architectures
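The page does not give GGscore's exact formulation, so the sketch below is only an illustration of the general idea: score each historical sample by a gradient-based influence proxy (here, the per-sample gradient norm of a logistic loss, which is an assumption, not the paper's definition) and keep the samples with the most extreme scores in the replay buffer. The names `ggscore_proxy` and `select_extreme` are hypothetical.

```python
import numpy as np

def ggscore_proxy(embeddings, labels, weights):
    """Illustrative influence proxy (NOT the paper's exact GGscore):
    L2 norm of each sample's gradient of a binary cross-entropy loss
    with respect to a linear scoring model's weights."""
    logits = embeddings @ weights
    probs = 1.0 / (1.0 + np.exp(-logits))
    # Per-sample BCE gradient w.r.t. weights: (p - y) * x
    per_sample_grad = (probs - labels)[:, None] * embeddings
    return np.linalg.norm(per_sample_grad, axis=1)

def select_extreme(scores, k):
    """Keep the k samples whose scores are maximally extreme:
    half from the lowest-scoring end, half from the highest."""
    order = np.argsort(scores)
    half = k // 2
    return np.concatenate([order[:half], order[-(k - half):]])

# Toy replay-buffer selection over 100 synthetic interactions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))          # sample embeddings
y = rng.integers(0, 2, size=100).astype(float)  # implicit feedback labels
w = rng.normal(size=8)                 # current model weights
scores = ggscore_proxy(X, y, w)
replay_idx = select_extreme(scores, 10)
print(len(replay_idx))  # 10 samples kept for replay
```

In an incremental-training loop, these selected samples would be mixed into each new data block so the model revisits its most influential past interactions, which is how replay-based methods counter catastrophic forgetting.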
πŸ‘₯ Authors
Yunxiao Shi, Shuo Yang, Haimin Zhang, Li Wang, Yongze Wang, Qiang Wu, Min Xu
School of Electrical and Data Engineering, University of Technology Sydney, Broadway, Sydney, NSW 2007, Australia