Enabling Group Fairness in Graph Unlearning via Bi-level Debiasing

📅 2025-05-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
In graph unlearning, deleting user information often exacerbates model reliance on sensitive attributes, leading to group-level unfairness. This work is the first to systematically identify and analyze this bias-amplification phenomenon. We propose Fair Graph Unlearning (FGU), a novel framework that jointly addresses privacy preservation, predictive accuracy, and fairness. FGU partitions the graph into subgraphs to isolate sensitive information, introduces a sensitive-attribute-aware fairness regularizer, and enforces cross-subgraph model alignment to achieve coordinated debiasing at both the subgraph and global levels. Evaluated on multiple benchmark graph datasets, FGU reduces fairness metrics—including demographic parity difference (DPD) and equalized odds difference (EODD)—by an average of 37.2%, incurs less than 1.5% accuracy degradation post-unlearning, and demonstrates robustness to heterogeneous unlearning requests. The method thus achieves a principled three-way trade-off among privacy, utility, and group fairness.
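The two fairness metrics named above, DPD and EODD, have standard definitions that can be computed directly from binary predictions and group membership. The sketch below is illustrative (the paper does not specify its exact implementation, and the max-over-TPR/FPR form of EODD is one common convention):

```python
import numpy as np

def demographic_parity_difference(y_pred, s):
    """Absolute gap in positive-prediction rates between the two sensitive groups."""
    y_pred, s = np.asarray(y_pred), np.asarray(s)
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def equalized_odds_difference(y_pred, y_true, s):
    """Max absolute gap in TPR (y=1) and FPR (y=0) between the two groups."""
    y_pred, y_true, s = map(np.asarray, (y_pred, y_true, s))
    gaps = []
    for y in (1, 0):
        g0 = y_pred[(s == 0) & (y_true == y)].mean()
        g1 = y_pred[(s == 1) & (y_true == y)].mean()
        gaps.append(abs(g0 - g1))
    return max(gaps)
```

A "37.2% reduction" in these metrics means the post-unlearning gaps shrink by that factor relative to an unfair baseline, not that they reach zero.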

📝 Abstract
Graph unlearning is a crucial approach for protecting user privacy by erasing the influence of user data on trained graph models. Recent developments in graph unlearning methods have primarily focused on maintaining model prediction performance while removing user information. However, we have observed that when user information is deleted from the model, the prediction distribution across different sensitive groups often changes. Furthermore, graph models are shown to be prone to amplifying biases, making the study of fairness in graph unlearning particularly important. This raises the question: Does graph unlearning actually introduce bias? Our findings indicate that the predictions of post-unlearning models become highly correlated with sensitive attributes, confirming the introduction of bias in the graph unlearning process. To address this issue, we propose a fair graph unlearning method, FGU. To guarantee privacy, FGU trains shard models on partitioned subgraphs, unlearns the requested data from the corresponding subgraphs, and retrains the shard models on the modified subgraphs. To ensure fairness, FGU employs a bi-level debiasing process: it first enables shard-level fairness by incorporating a fairness regularizer in the shard model retraining, and then achieves global-level fairness by aligning all shard models to minimize global disparity. Our experiments demonstrate that FGU achieves superior fairness while maintaining privacy and accuracy. Additionally, FGU is robust to diverse unlearning requests, ensuring fairness and utility performance across various data distributions.
Problem

Research questions and friction points this paper is trying to address.

Graph unlearning introduces bias in prediction distribution
Fairness in graph unlearning is crucial but overlooked
Existing methods lack group fairness during data removal
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses shard models on partitioned subgraphs
Incorporates fairness regularizer in retraining
Aligns shard models to minimize global disparity
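The bi-level debiasing described in the abstract can be sketched as two loss terms: a shard-level objective (task loss plus a fairness regularizer) and a global alignment penalty pulling shard models together. The penalty forms, weights `lam` and `mu`, and the centroid-based alignment below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def shard_loss(logits, labels, s, lam=0.5):
    """Shard-level objective: binary cross-entropy plus a soft demographic-parity gap."""
    p = 1.0 / (1.0 + np.exp(-np.asarray(logits)))          # sigmoid probabilities
    labels, s = np.asarray(labels), np.asarray(s)
    ce = -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    dp_gap = abs(p[s == 0].mean() - p[s == 1].mean())      # differentiable DP surrogate
    return ce + lam * dp_gap

def global_alignment(shard_params, mu=0.1):
    """Global-level penalty: mean squared distance of each shard model to the centroid."""
    params = np.stack([np.asarray(w) for w in shard_params])
    center = params.mean(axis=0)
    return mu * np.mean(np.sum((params - center) ** 2, axis=1))
```

Minimizing `shard_loss` during shard retraining handles shard-level fairness; adding `global_alignment` over all shard parameters discourages the shards from drifting into mutually inconsistent (and globally disparate) predictors.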
Yezi Liu
University of California Irvine
Trustworthy Machine Learning · Graph Neural Networks · Trustworthy LLMs
Prathyush Poduval
Donald Bren School of Information and Computer Sciences, University of California, Irvine, CA 92697 USA
Wenjun Huang
Donald Bren School of Information and Computer Sciences, University of California, Irvine, CA 92697 USA
Yang Ni
Donald Bren School of Information and Computer Sciences, University of California, Irvine, CA 92697 USA
Hanning Chen
University of California, Irvine
Computer Architecture · FPGA · Machine Learning
Mohsen Imani
Associate Professor, University of California Irvine
Machine Learning · Brain-Inspired Systems · Intelligent Systems