🤖 AI Summary
Session-based recommendation systems require "approximate unlearning" of specific training samples—i.e., eliminating their influence without fully retraining the model.
Method: We propose CAU, a curriculum learning–driven unlearning framework. CAU introduces the first curriculum-based approach to approximate unlearning, featuring dual difficulty metrics (gradient-based and embedding-based), a multi-objective formulation that balances unlearning strength against recommendation performance, and Pareto-optimal solution selection. It further incorporates difficulty-aware hard/soft sampling strategies to explicitly model the processing order of samples across multiple unlearning requests.
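To make the curriculum idea concrete, here is a minimal sketch of gradient-based difficulty scoring and easy-to-hard batching, using a logistic model as a stand-in since the paper's session-based recommender and the exact definitions of its difficulty metrics are not given here; the function names and the gradient-norm proxy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gradient_difficulty(w, X, y):
    """Illustrative gradient-based difficulty: samples whose removal
    would require a large parameter update (big per-sample gradient
    norm) are treated as harder to unlearn. Logistic model assumed."""
    p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid predictions
    grads = (p - y)[:, None] * X           # per-sample loss gradients
    return np.linalg.norm(grads, axis=1)   # one difficulty score per sample

def curriculum_batches(scores, batch_size):
    """Order unlearning samples easy -> hard and split into batches,
    mirroring the easy-to-hard curriculum schedule."""
    order = np.argsort(scores)             # ascending difficulty
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]
```

Hard-sampling would take the batches in this order directly; soft-sampling could instead draw each batch with probabilities derived from the difficulty scores.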
Results: Evaluated on multiple benchmark datasets, CAU significantly outperforms state-of-the-art methods: it efficiently nullifies the impact of target samples while incurring minimal degradation in recommendation performance—averaging only 0.3%–1.2% drop in key metrics.
📝 Abstract
Approximate unlearning for session-based recommendation refers to eliminating the influence of specific training samples from the recommender without retraining the (sub-)models. Gradient ascent (GA) is a representative method for approximate unlearning. However, two challenges remain when applying GA to session-based recommendation. On the one hand, naively applying GA can degrade recommendation performance. On the other hand, existing studies fail to consider the ordering of unlearning samples when processing multiple unlearning requests simultaneously, leading to sub-optimal recommendation performance and unlearning effectiveness. To address these challenges, we introduce CAU, a curriculum approximate unlearning framework tailored to session-based recommendation. CAU handles the unlearning task with a GA term on unlearning samples. Specifically, to address the first challenge, CAU formulates the overall optimization task as a multi-objective optimization problem, where the GA term for unlearning samples is combined with retaining terms that preserve performance. The multi-objective problem is solved by seeking a Pareto-optimal solution, which achieves effective unlearning with negligible sacrifice in recommendation performance. To tackle the second challenge, CAU adopts a curriculum-based schedule to unlearn batches of samples. The key motivation is to perform unlearning from easy samples to harder ones. To this end, CAU first introduces two metrics to measure unlearning difficulty: gradient unlearning difficulty and embedding unlearning difficulty. Then, two strategies, hard-sampling and soft-sampling, are proposed to select unlearning samples according to their difficulty scores.
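The combined objective above can be sketched as a single update step: gradient ascent on the forget set plus gradient descent on the retain set. This toy uses a logistic model and a fixed mixing weight `alpha` as a simple scalarization stand-in for the paper's Pareto-optimal trade-off search; all names and the model choice are assumptions for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_grad(w, X, y):
    """Mean gradient of the logistic loss over a sample set."""
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def logistic_loss(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def unlearning_step(w, X_forget, y_forget, X_retain, y_retain, lr=0.05, alpha=0.5):
    """One combined update: gradient ASCENT on the forget (unlearning)
    set plus gradient descent on the retain set. The fixed weight
    `alpha` crudely approximates the multi-objective trade-off that
    CAU resolves via Pareto-optimal solution selection."""
    g_forget = logistic_grad(w, X_forget, y_forget)
    g_retain = logistic_grad(w, X_retain, y_retain)
    return w - lr * (alpha * (-g_forget) + (1 - alpha) * g_retain)
```

With `alpha=1.0` the step is pure gradient ascent on the forget set, so the forget-set loss increases; the retaining term (`alpha < 1`) is what keeps this from collapsing overall performance.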