🤖 AI Summary
In graph unlearning, deleting user information often exacerbates model reliance on sensitive attributes, leading to group-level unfairness. This work is the first to systematically identify and analyze this bias amplification phenomenon. We propose Fair Graph Unlearning (FGU), a novel framework that jointly addresses privacy preservation, predictive accuracy, and fairness. FGU partitions the graph into subgraphs so that deletion requests can be handled by retraining only the affected shard models, introduces a sensitive-attribute-aware fairness regularizer, and enforces cross-subgraph model alignment to achieve coordinated debiasing at both the subgraph and global levels. Evaluated on multiple benchmark graph datasets, FGU reduces fairness metrics—including demographic parity difference (DPD) and equalized odds difference (EODD)—by an average of 37.2%, incurs less than 1.5% accuracy degradation post-unlearning, and demonstrates robustness to heterogeneous unlearning requests. The method thus achieves a principled three-way trade-off among privacy, utility, and group fairness.
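The two fairness metrics named above have standard definitions: DPD is the gap in positive-prediction rates between sensitive groups, and EODD is the larger of the true-positive-rate and false-positive-rate gaps. A minimal sketch of both (standard definitions, not the paper's evaluation code; binary labels and a binary sensitive attribute are assumed):

```python
def rate(preds, mask):
    """Mean prediction over the entries selected by the boolean mask."""
    sel = [p for p, m in zip(preds, mask) if m]
    return sum(sel) / len(sel) if sel else 0.0

def demographic_parity_diff(y_pred, s):
    """|P(yhat=1 | s=0) - P(yhat=1 | s=1)|."""
    return abs(rate(y_pred, [g == 0 for g in s]) -
               rate(y_pred, [g == 1 for g in s]))

def equalized_odds_diff(y_true, y_pred, s):
    """Max of the FPR gap (y=0) and TPR gap (y=1) between groups."""
    diffs = []
    for y in (0, 1):
        diffs.append(abs(
            rate(y_pred, [g == 0 and t == y for g, t in zip(s, y_true)]) -
            rate(y_pred, [g == 1 and t == y for g, t in zip(s, y_true)])))
    return max(diffs)
```

A post-unlearning model that "introduces bias" in the paper's sense is one whose DPD/EODD rise relative to the model before deletion; FGU's reported 37.2% average reduction is measured on gaps of exactly this form.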
📝 Abstract
Graph unlearning is a crucial approach for protecting user privacy by erasing the influence of user data on trained graph models. Recent developments in graph unlearning methods have primarily focused on maintaining model prediction performance while removing user information. However, we have observed that when user information is deleted from the model, the prediction distribution across different sensitive groups often changes. Furthermore, graph models are known to be prone to amplifying bias, making the study of fairness in graph unlearning particularly important. This raises the question: does graph unlearning actually introduce bias? Our findings indicate that the predictions of post-unlearning models become highly correlated with sensitive attributes, confirming that the graph unlearning process introduces bias. To address this issue, we propose a fair graph unlearning method, FGU. To guarantee privacy, FGU trains shard models on partitioned subgraphs, unlearns the requested data from the corresponding subgraphs, and retrains the shard models on the modified subgraphs. To ensure fairness, FGU employs a bi-level debiasing process: it first enables shard-level fairness by incorporating a fairness regularizer into shard model retraining, and then achieves global-level fairness by aligning all shard models to minimize global disparity. Our experiments demonstrate that FGU achieves superior fairness while maintaining privacy and accuracy. Additionally, FGU is robust to diverse unlearning requests, maintaining fairness and utility across various data distributions.
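The bi-level debiasing described above can be sketched as a per-shard retraining objective with three terms: the shard's task loss, a shard-level fairness penalty, and a global alignment term pulling each shard model toward the cross-shard average. This is an illustrative sketch only, assuming a binary classifier with a logistic task loss, a mean-prediction-gap fairness penalty, squared-distance alignment, and weights `lam`/`mu`; the paper's exact loss functions and regularizer forms are not specified here:

```python
import numpy as np

def shard_objective(theta, thetas_all, logits, labels, groups,
                    lam=0.5, mu=0.1):
    """Hypothetical per-shard retraining objective in the spirit of FGU.

    theta      : this shard's parameter vector
    thetas_all : stacked parameter vectors of all shards
    logits     : this shard's predictions on its (post-deletion) subgraph
    labels     : binary ground-truth labels
    groups     : binary sensitive attribute per node
    """
    # Task term: binary cross-entropy on the retained shard data.
    p = 1.0 / (1.0 + np.exp(-logits))
    task = -np.mean(labels * np.log(p + 1e-12) +
                    (1 - labels) * np.log(1 - p + 1e-12))
    # Shard-level fairness regularizer: gap in mean positive rate
    # between the two sensitive groups within this shard.
    fair = abs(p[groups == 0].mean() - p[groups == 1].mean())
    # Global-level alignment: pull shard parameters toward the
    # cross-shard mean to reduce disparity across shards.
    align = np.linalg.norm(theta - thetas_all.mean(axis=0)) ** 2
    return task + lam * fair + mu * align
```

The design intent matches the abstract: deletions touch only one shard's subgraph and trigger retraining of that shard alone (preserving the privacy guarantee of shard-based unlearning), while the `fair` and `align` terms implement the shard-level and global-level halves of the debiasing.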