🤖 AI Summary
Node-level forgetting in graph neural networks (GNNs) lacks efficient, retraining-free solutions, and existing graph influence functions (GIFs) struggle with complex node-removal requests. This paper proposes the first node-forgetting framework grounded in embedding reconstruction and range-null space decomposition: it inversely models the GNN's aggregation process to reconstruct the embeddings of forgotten nodes and confines forgetting perturbations to the null space to preserve model utility. Crucially, it introduces range-null space decomposition to graph forgetting, overcoming the longstanding limitation of conventional influence functions, which are confined to edge-level or local perturbations. Evaluated on multiple benchmark datasets, the method achieves state-of-the-art forgetting performance while remaining efficient, utility-preserving, and fully retraining-free.
📝 Abstract
Graph unlearning tailors machine unlearning to GNNs in order to handle widespread and varied graph-structure unlearning requests, which remain largely unexplored. The graph influence function (GIF) is effective for partial edge unlearning but struggles with the more disruptive case of node unlearning. To avoid the overhead of retraining while preserving model utility after unlearning, we propose a novel node unlearning method that reverses the aggregation process in the GNN via embedding reconstruction and adopts range-null space decomposition to account for node interactions. Experimental results on multiple representative datasets demonstrate the state-of-the-art performance of our proposed approach.
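The core linear-algebra idea behind range-null space decomposition can be illustrated in isolation: any perturbation can be split, via the Moore-Penrose pseudoinverse, into a component in the row space of an operator and a component in its null space, and only the null-space component leaves the operator's output unchanged. The sketch below is a minimal NumPy illustration of that property, not the paper's actual unlearning algorithm; the matrix `A` here is an arbitrary stand-in for whatever linear map the method preserves.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # wide matrix, so it has a non-trivial null space
x = rng.standard_normal(5)

# Projectors built from the pseudoinverse A+:
#   P_range = A+ A projects onto the row space of A,
#   P_null  = I - A+ A projects onto the null space of A.
A_pinv = np.linalg.pinv(A)
P_range = A_pinv @ A
P_null = np.eye(5) - P_range

# Split an arbitrary perturbation into its range and null-space parts.
delta = rng.standard_normal(5)
delta_null = P_null @ delta

# Perturbing x inside the null space does not change the output A x,
# which is why null-space updates can "forget" without harming utility.
assert np.allclose(A @ x, A @ (x + delta_null))
```

The same decomposition underlies the paper's utility-preservation argument: edits applied in the null space of the preserved mapping are invisible to that mapping's outputs.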