OpenGU: A Comprehensive Benchmark for Graph Unlearning

📅 2025-01-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph unlearning (GU) lacks a standardized evaluation benchmark that simultaneously accounts for graph structural properties, GNN mechanisms, diverse downstream tasks, and flexible unlearning requests, hindering fair algorithmic comparison and progress. Method: We introduce OpenGU, the first open-source GU benchmark tailored for GNNs. It establishes a unified evaluation framework integrating 16 state-of-the-art unlearning algorithms, 37 cross-domain graph datasets, and 13 GNN backbones. OpenGU provides a message-passing-based API compatible with multiple downstream tasks, together with a modular parser for unlearning requests. Contribution/Results: Comprehensive experiments yield eight key empirical findings that expose fundamental limitations of existing methods. OpenGU fills a critical gap in the field, providing a reproducible, scalable, and standardized platform for evaluating GU algorithms, advancing privacy-compliant learning on dynamically evolving graphs.

📝 Abstract
Graph Machine Learning is essential for understanding and analyzing relational data. However, privacy-sensitive applications demand the ability to efficiently remove sensitive information from trained graph neural networks (GNNs), avoiding the unnecessary time and space overhead of retraining models from scratch. To address this issue, Graph Unlearning (GU) has emerged as a critical solution, with the potential to support dynamic graph updates in data management systems and to enable scalable unlearning in distributed data systems while ensuring privacy compliance. Unlike machine unlearning in computer vision or other fields, GU faces unique difficulties due to the non-Euclidean nature of graph data and the recursive message-passing mechanism of GNNs. Additionally, the diversity of downstream tasks and the complexity of unlearning requests further amplify these challenges. Despite the proliferation of diverse GU strategies, the absence of a benchmark enabling fair comparisons and the limited flexibility in combining downstream tasks with unlearning requests have led to inconsistent evaluations, hindering the development of this domain. To fill this gap, we present OpenGU, the first GU benchmark, which integrates 16 SOTA GU algorithms and 37 multi-domain datasets, supporting various downstream tasks with 13 GNN backbones in response to flexible unlearning requests. Based on this unified benchmark framework, we are able to provide a comprehensive and fair evaluation for GU. Through extensive experimentation, we have drawn eight crucial conclusions about existing GU methods, while also gaining valuable insights into their limitations, shedding light on potential avenues for future research.
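The abstract distinguishes flexible unlearning requests operating at different granularities (e.g., removing a node, an edge, or feature information). As an illustration only, the sketch below shows what parsing and applying such requests on a toy graph could look like; the request format, function names, and feature-removal semantics here are invented for this example and are not OpenGU's actual API.

```python
# Hypothetical sketch (NOT OpenGU's real interface): parse unlearning
# requests of the form "node:2", "edge:0-1", or "feature:3" and apply
# them to a toy undirected graph stored as an adjacency dict.

def parse_request(request):
    """Split a request string into (kind, target)."""
    kind, _, target = request.partition(":")
    if kind == "edge":
        u, v = map(int, target.split("-"))
        return kind, (u, v)
    return kind, int(target)

def apply_unlearning(graph, features, request):
    """Return copies of (graph, features) with the requested element removed."""
    kind, target = parse_request(request)
    graph = {u: set(vs) for u, vs in graph.items()}   # copy, don't mutate input
    features = dict(features)
    if kind == "node":
        graph.pop(target, None)                       # drop the node itself
        for neighbors in graph.values():
            neighbors.discard(target)                 # drop incident edges
        features.pop(target, None)                    # drop its feature vector
    elif kind == "edge":
        u, v = target
        graph.get(u, set()).discard(v)                # remove both directions
        graph.get(v, set()).discard(u)
    elif kind == "feature":
        features.pop(target, None)                    # forget node features only;
                                                      # structure stays intact
    return graph, features

# Toy example: a 3-node star graph with scalar node features.
g = {0: {1, 2}, 1: {0}, 2: {0}}
x = {0: [1.0], 1: [0.5], 2: [0.2]}
g2, x2 = apply_unlearning(g, x, "node:2")
```

After a real unlearning step, a benchmark like OpenGU would then compare the updated model against a model retrained from scratch on the reduced graph; this sketch only covers the request-handling side.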
Problem

Research questions and friction points this paper is trying to address.

Graph Unlearning
Comprehensive Platform
Evaluation Framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

OpenGU
Graph Unlearning Updates
Comprehensive Evaluation Platform