🤖 AI Summary
Graph unlearning in graph neural networks (GNNs) aims to securely remove outdated, erroneous, or privacy-sensitive nodes and edges; however, existing methods suffer from incomplete unlearning, over-unlearning, or degraded generalization because they neglect task heterogeneity and model neighbor influence inaccurately. This paper proposes Adaptive Graph Unlearning (AGU), the first framework to integrate gradient sensitivity analysis, architecture-aware neighbor propagation modeling, and importance-prioritized retraining. AGU adapts dynamically to diverse unlearning tasks and GNN architectures (e.g., GCN, GAT, GraphSAGE), precisely identifying and weighting the critical neighbors affected by removed elements to enable selective yet complete unlearning. Evaluated on seven real-world graph datasets, AGU improves unlearning accuracy by 12.6–28.3%, reduces inference overhead by 41%, and incurs less than 1.5% model accuracy degradation.
📝 Abstract
Graph unlearning, which deletes graph elements such as nodes and edges from trained graph neural networks (GNNs), is crucial for real-world applications where graph data may contain outdated, inaccurate, or privacy-sensitive information. However, existing methods often suffer from (1) incomplete or over-unlearning due to neglecting the distinct objectives of different unlearning tasks, and (2) inaccurate identification of the neighbors affected by deleted elements across various GNN architectures. To address these limitations, we propose AGU, a novel Adaptive Graph Unlearning framework that flexibly adapts to diverse unlearning tasks and GNN architectures. AGU ensures the complete forgetting of deleted elements while preserving the integrity of the remaining graph. It also accurately identifies the affected neighbors for each GNN architecture and prioritizes important ones to enhance unlearning performance. Extensive experiments on seven real-world graphs demonstrate that AGU outperforms existing methods in terms of effectiveness, efficiency, and unlearning capability.
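The "affected neighbors" the abstract refers to follow from GNN message passing: in an L-layer GNN, deleting a node can perturb the embeddings of every node within L hops of it, so those are the candidates for targeted updating. The paper's actual identification and prioritization procedure is not shown here; the following is only a minimal sketch of the k-hop candidate set via breadth-first search over a plain adjacency-dict graph (the function name and data layout are illustrative, not from the paper).

```python
from collections import deque

def affected_neighbors(adj, deleted, k):
    """Return surviving nodes within k hops of any deleted node.

    adj     -- dict mapping node -> list of neighbor nodes (undirected graph)
    deleted -- set of nodes being unlearned
    k       -- receptive-field depth (number of GNN layers)
    """
    frontier = deque((d, 0) for d in deleted)  # BFS seeded at deleted nodes
    seen = set(deleted)
    affected = set()
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:          # beyond the k-hop receptive field
            continue
        for nb in adj.get(node, []):
            if nb not in seen:  # each node visited at its minimum hop distance
                seen.add(nb)
                affected.add(nb)
                frontier.append((nb, depth + 1))
    return affected

# Path graph 0-1-2-3 plus isolated node 4; unlearn node 1.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2], 4: []}
print(affected_neighbors(adj, {1}, k=2))  # → {0, 2, 3}
```

With a 2-layer GNN (k=2), deleting node 1 touches its direct neighbors {0, 2} and the 2-hop node 3, while the isolated node 4 is untouched; a method like AGU would then restrict its corrective updates (and, per the abstract, weight them by importance) to that set rather than retraining on the full graph.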