🤖 AI Summary
To address graph-structural perturbations and performance degradation caused by user data deletion in Graph Neural Network (GNN)-based recommender systems, this paper proposes UnlearnRec—the first model-agnostic pretraining-based unlearning framework. Its core innovation is the Influence Encoder, which directly models the impact of data removal on model parameters, enabling parameter regeneration post-unlearning without full retraining. UnlearnRec integrates self-supervised pretraining with a request-driven parameter mapping mechanism to balance generalizability and response efficiency. Evaluated on public benchmarks, UnlearnRec achieves over a 10× speedup compared to complete retraining, while its unlearning fidelity and recommendation accuracy closely approximate those of full retraining—significantly outperforming both data-partitioning approaches and conventional influence-function methods.
📝 Abstract
Modern recommender systems powered by Graph Neural Networks (GNNs) excel at modeling complex user-item interactions, yet increasingly face scenarios requiring selective forgetting of training data. Beyond user requests to remove specific interactions due to privacy concerns or preference changes, regulatory frameworks mandate that recommender systems be able to eliminate the influence of certain user data from models. This recommendation unlearning challenge presents unique difficulties: removing connections within interaction graphs creates ripple effects throughout the model, potentially impacting recommendations for numerous users. Traditional approaches suffer from significant drawbacks: fragmentation methods damage graph structure and diminish performance, while influence function techniques make assumptions that may not hold in complex GNNs, particularly those with self-supervised or randomized architectures. To address these limitations, we propose a novel model-agnostic pre-training paradigm, UnlearnRec, that prepares systems for efficient unlearning operations. Our Influence Encoder takes unlearning requests together with existing model parameters and directly produces updated parameters for the unlearned model with little fine-tuning, avoiding complete retraining while preserving model performance characteristics. Extensive evaluation on public benchmarks demonstrates that our method delivers exceptional unlearning effectiveness while providing more than 10× speedup compared to retraining approaches. We release our method implementation at: https://github.com/HKUDS/UnlearnRec.
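The core interface described above — a pretrained encoder that maps (current model parameters, deletion request) to approximately-unlearned parameters, sparing a full retrain — can be caricatured in a few lines. This is an illustrative sketch only, not the paper's implementation: the `InfluenceEncoder` class, its `unlearn` method, and the simple damping rule are all hypothetical stand-ins for the learned, self-supervised network the paper actually trains.

```python
# Hypothetical sketch of a pretraining-based unlearning interface.
# Names and the damping update are illustrative, not from the UnlearnRec code.

class InfluenceEncoder:
    """Maps (model parameters, unlearning request) -> updated parameters.

    Parameters are modeled as a dict of user/item embedding vectors, and a
    request is a set of (user, item) interactions to forget. A real encoder
    would be a learned network pretrained with self-supervision; this toy
    version merely dampens embeddings of nodes touched by the request.
    """

    def __init__(self, damping: float = 0.5):
        self.damping = damping  # stand-in for learned influence weights

    def unlearn(self, params: dict, request: set) -> dict:
        touched = {node for edge in request for node in edge}
        return {
            node: [self.damping * x for x in vec] if node in touched else list(vec)
            for node, vec in params.items()
        }


params = {"u1": [1.0, 2.0], "u2": [0.5, 0.5], "i9": [3.0, -1.0]}
request = {("u1", "i9")}  # forget user u1's interaction with item i9

encoder = InfluenceEncoder()
new_params = encoder.unlearn(params, request)
# Only parameters of nodes in the request are adjusted; the rest pass
# through unchanged, so no retraining over the full interaction graph runs.
```

The point of the interface is the shape of the computation, not the update rule: unlearning becomes a single forward mapping over existing parameters (plus light fine-tuning), which is where the reported >10× speedup over full retraining comes from.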