Dynamic Graph Unlearning: A General and Efficient Post-Processing Method via Gradient Transformation

πŸ“… 2024-05-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Addressing privacy-compliance requirements for user data β€œunlearning” in Dynamic Graph Neural Networks (DGNNs), this work tackles the efficient forgetting of temporal edges and nodes. Method: We propose the first generic, architecture-agnostic, post-hoc unlearning method that requires no model retraining. We formalize forgetting in dynamic graph settings and introduce a gradient transformation mechanism that directly maps deletion requests to parameter differential updates, enabling incremental handling of arbitrary future requests. Our approach leverages continuous-time dynamic graph modeling and gradient-space projection, ensuring compatibility with mainstream DGNNs (e.g., TGAT, TGN). Results: Evaluated on six real-world datasets, our method achieves an average speedup of 7.23Γ— over retraining, with negligible or even improved utility. For future deletion requests, it accelerates unlearning by up to 32.59Γ—, substantially outperforming static-graph unlearning baselines.
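The core idea in the summary above, mapping a deletion request's gradient directly to a parameter update rather than retraining, can be sketched on a toy model. This is not the paper's DGNN architecture; the linear model, the identity transform, and all names here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of a "gradient transformation" unlearning step on a
# toy linear model (NOT the paper's actual method; names/shapes assumed).
# Idea: map the forget-set gradient through a transformation W and apply
# it directly as a parameter update, instead of retraining from scratch.

rng = np.random.default_rng(0)

def forget_grad(theta, X, y):
    # Gradient of mean squared error on the data to be forgotten.
    return 2.0 * X.T @ (X @ theta - y) / len(y)

def unlearn_step(theta, X_forget, y_forget, W, lr=0.1):
    """Post-hoc update: ascend the forget-set loss along the
    transformed gradient so the model degrades its fit on
    (i.e., 'forgets') those samples."""
    g = forget_grad(theta, X_forget, y_forget)
    return theta + lr * (W @ g)

d = 4
theta = rng.normal(size=d)
X_f = rng.normal(size=(8, d))
y_f = X_f @ rng.normal(size=d)    # synthetic targets for the forget set
W = np.eye(d)                      # placeholder for a learned transform
theta_new = unlearn_step(theta, X_f, y_f, W)
```

With `W = I` this collapses to plain gradient ascent on the forget loss; the paper's contribution is learning the transformation so that a single mapped update both removes the requested data's influence and preserves utility on the retained graph.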

πŸ“ Abstract
Dynamic graph neural networks (DGNNs) have emerged and been widely deployed in various web applications (e.g., Reddit) to serve users (e.g., personalized content delivery) due to their remarkable ability to learn from complex and dynamic user interaction data. Despite benefiting from high-quality services, users have raised privacy concerns, such as misuse of personal data (e.g., dynamic user-user/item interactions) for model training, requiring DGNNs to "forget" their data to meet AI governance laws (e.g., the "right to be forgotten" in GDPR). However, current static graph unlearning studies cannot unlearn dynamic graph elements and exhibit limitations such as model-specific designs or reliance on pre-processing, which undermine their practicality for dynamic graph unlearning. To this end, we study dynamic graph unlearning for the first time and propose an effective, efficient, general, and post-processing method to implement DGNN unlearning. Specifically, we first formulate dynamic graph unlearning in the context of continuous-time dynamic graphs, and then propose a method called Gradient Transformation that directly maps the unlearning request to the desired parameter update. Comprehensive evaluations on six real-world datasets and state-of-the-art DGNN backbones demonstrate its effectiveness (e.g., limited drop or obvious improvement in utility) and efficiency (e.g., 7.23× speed-up) advantages. Additionally, our method has the potential to handle future unlearning requests with significant performance gains (e.g., 32.59× speed-up).
Problem

Research questions and friction points this paper is trying to address.

Dynamic Graph Neural Networks
Data Forgetting
Privacy Protection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Graph Neural Networks
Machine Unlearning
Parameter Update Efficiency
πŸ”Ž Similar Papers
No similar papers found.