Retrieval and Distill: A Temporal Data Shift-Free Paradigm for Online Recommendation System

📅 2024-04-24
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Recommender systems suffer from temporal data shift—the misalignment between historical training distributions and online inference distributions—leading to performance degradation. To address this, the paper proposes a two-stage framework. First, leveraging the newly introduced Temporal Invariance of Association theorem, it constructs a retrieval-based relevance network that explicitly models temporally stable user-item associations, enabling adaptation to distributional drift without retraining. Second, it designs a lightweight knowledge distillation mechanism to transfer these temporally invariant patterns into a low-latency online model. The approach integrates retrieval augmentation, temporal invariance modeling, and incremental learning. Extensive experiments on multiple real-world datasets demonstrate significant improvements in recommendation accuracy with only marginal increases in inference latency, along with improved robustness and generalization in dynamic environments.

📝 Abstract
Current recommendation systems are significantly affected by temporal data shift: the inconsistency between the distribution of historical data and that of online data. Most existing models focus on utilizing freshly updated data, overlooking the transferable, shift-free information that can be learned from shifting data. We propose the Temporal Invariance of Association theorem, which states that given a fixed search space, the relationship between incoming data and the data in that search space remains invariant over time. Leveraging this principle, we design a retrieval-based recommendation framework that trains a data shift-free relevance network on shifting data, significantly enhancing the predictive performance of the original model. However, retrieval-based recommendation models incur substantial inference-time costs when deployed online. To address this, we further design a distillation framework that transfers information from the relevance network into a parameterized module, again using shifting data. The distilled model can be deployed online alongside the original model with only a minimal increase in inference time. Extensive experiments on multiple real-world datasets demonstrate that our framework significantly improves the performance of the original model by utilizing shifting data.
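The retrieval stage described in the abstract, scoring a new interaction against a fixed search space so that the learned associations stay stable as the data distribution drifts, can be sketched roughly as below. This is an illustrative reconstruction under stated assumptions, not the authors' code: the embeddings, the search pool, and the binary click labels are hypothetical stand-ins.

```python
import numpy as np

def retrieve_relevance(query_emb, pool_embs, pool_labels, k=5):
    """Score a query interaction against a fixed search pool.

    The pool is held fixed, following the Temporal Invariance of
    Association idea: query-to-pool associations are assumed to stay
    stable even as the online data distribution shifts.
    """
    # Cosine similarity between the query and every pooled sample.
    q = query_emb / np.linalg.norm(query_emb)
    p = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    sims = p @ q
    # Keep the k most similar historical interactions.
    top_k = np.argsort(sims)[-k:]
    # Similarity-weighted vote over their observed labels (e.g. clicks);
    # negative similarities are clipped so the score stays in [0, 1].
    weights = np.maximum(sims[top_k], 0.0)
    return float(weights @ pool_labels[top_k] / (weights.sum() + 1e-8))

# Toy usage: a 4-sample pool with binary click labels (random stand-ins).
rng = np.random.default_rng(0)
pool = rng.normal(size=(4, 8))
labels = np.array([1.0, 0.0, 1.0, 1.0])
score = retrieve_relevance(rng.normal(size=8), pool, labels, k=2)
print(score)
```

The fixed pool is the key design choice: because the pool does not move with the online distribution, the relevance function can be trained once on shifting data without chasing the drift.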
Problem

Research questions and friction points this paper is trying to address.

Address temporal data shift in recommendation systems
Train data shift-free models with shifting data
Reduce inference time cost for online deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages Temporal Invariance of Association theorem
Retrieval-based framework for shift-free relevance
Distillation framework minimizes online inference time
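The distillation step in the last point can be sketched as a small student model fitted to the relevance network's outputs with an MSE loss, so the costly retrieval step is not needed online. Everything below (the features, the teacher scores, the single linear student) is a hypothetical stand-in for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical logged interactions and the relevance network's
# (teacher) scores for them; random stand-ins for illustration.
X = rng.normal(size=(256, 8))
w_teacher = rng.normal(size=8)
teacher_scores = X @ w_teacher  # stands in for retrieval outputs

# Student: a single linear layer distilled via gradient descent on an
# MSE loss against the teacher, deployable with negligible latency.
w = np.zeros(8)
lr = 0.01
for _ in range(500):
    pred = X @ w
    grad = 2 * X.T @ (pred - teacher_scores) / len(X)
    w -= lr * grad

mse = float(np.mean((X @ w - teacher_scores) ** 2))
print(mse)
```

After training, only the distilled parameters are served online alongside the original model, which is how the framework keeps the inference-time increase minimal.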
Lei Zheng
Shanghai Jiao Tong University, Shanghai, China
Ning Li
Shanghai Jiao Tong University, Shanghai, China
Weinan Zhang
Shanghai Jiao Tong University, Shanghai, China
Yong Yu
Shanghai Jiao Tong University, Shanghai, China