🤖 AI Summary
Existing graph unlearning methods are limited to unsigned graphs and neglect the structural constraints imposed by positive and negative edges in signed graphs, leading to imbalanced subgraph partitioning and degraded unlearning performance. To address this, we propose SGU, the first unlearning framework tailored for signed graphs. Its core innovations are: (1) a sign-aware balanced subgraph partitioning paradigm that explicitly models edge signs while preserving structural balance; and (2) an incremental GNN update mechanism incorporating signed-graph constraints to mitigate forgetting-induced distortions. Extensive experiments on multiple benchmark datasets demonstrate that SGU significantly outperforms state-of-the-art methods, achieving fast, robust, and accurate unlearning on signed graphs while maintaining high model fidelity.
📝 Abstract
The proliferation of signed networks on contemporary social media platforms necessitates robust privacy-preserving mechanisms. Graph unlearning, which aims to eliminate the influence of specific data points from trained models without full retraining, becomes particularly critical in these scenarios, where user interactions are sensitive and dynamic. Existing graph unlearning methodologies are designed exclusively for unsigned networks and fail to account for the unique structural properties of signed graphs. Applying them naively to signed networks discards edge sign information, causing structural imbalance across subgraphs and consequently degrading both model performance and unlearning efficiency. This paper proposes SGU (Signed Graph Unlearning), a graph unlearning framework designed specifically for signed networks. SGU incorporates a new graph unlearning partition paradigm and a novel signed network partition algorithm that preserve edge sign information during partitioning and ensure structural balance across partitions. Compared with baselines, SGU achieves state-of-the-art results in both model performance and unlearning efficiency.
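To make the notion of structural balance concrete: a signed graph is balanced exactly when its nodes can be split into two camps so that positive edges stay within a camp and negative edges cross camps (equivalently, every cycle contains an even number of negative edges). The sketch below checks this property via BFS 2-coloring. It is an illustration of the balance condition that SGU's partitioner preserves, not the SGU algorithm itself; the function name and edge-list format are hypothetical.

```python
from collections import deque

def is_structurally_balanced(num_nodes, signed_edges):
    """Check structural balance of a signed graph via BFS 2-coloring.

    signed_edges: list of (u, v, sign) tuples with sign in {+1, -1}.
    Returns True iff nodes can be split into two camps such that
    positive edges are intra-camp and negative edges are inter-camp.
    """
    # Build an undirected adjacency list carrying edge signs.
    adj = [[] for _ in range(num_nodes)]
    for u, v, s in signed_edges:
        adj[u].append((v, s))
        adj[v].append((u, s))

    color = [None] * num_nodes  # camp assignment: 0 or 1
    for start in range(num_nodes):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, s in adj[u]:
                # Positive edge: same camp; negative edge: opposite camp.
                expected = color[u] if s > 0 else 1 - color[u]
                if color[v] is None:
                    color[v] = expected
                    queue.append(v)
                elif color[v] != expected:
                    # Found a cycle with an odd number of negative edges.
                    return False
    return True
```

For example, a triangle with one negative edge is unbalanced, while one with two negative edges is balanced; a partitioner that ignores signs cannot distinguish these cases.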