FairGU: Fairness-aware Graph Unlearning in Social Networks

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the critical issue that existing graph unlearning methods often inadvertently leak or amplify sensitive attributes when deleting nodes, thereby compromising algorithmic fairness. To mitigate this, we propose FairGU—the first framework that explicitly integrates fairness guarantees into the graph unlearning process. FairGU employs a fairness-aware module in conjunction with a structure-preserving strategy to jointly optimize the removal of node influence while effectively suppressing the exposure and amplification of sensitive attributes. Extensive experiments on multiple real-world graph datasets demonstrate that FairGU significantly outperforms current graph unlearning and fairness-enhancing baselines, achieving substantial improvements in fairness metrics without sacrificing model utility.

📝 Abstract
Graph unlearning has emerged as a critical mechanism for supporting sustainable and privacy-preserving social networks, enabling models to remove the influence of deleted nodes and thereby better safeguard user information. However, we observe that existing graph unlearning techniques insufficiently protect sensitive attributes, often leading to degraded algorithmic fairness compared with traditional graph learning methods. To address this gap, we introduce FairGU, a fairness-aware graph unlearning framework designed to preserve both utility and fairness during the unlearning process. FairGU integrates a dedicated fairness-aware module with effective data protection strategies, ensuring that sensitive attributes are neither inadvertently amplified nor structurally exposed when nodes are removed. Through extensive experiments on multiple real-world datasets, we demonstrate that FairGU consistently outperforms state-of-the-art graph unlearning methods and fairness-enhanced graph learning baselines in terms of both accuracy and fairness metrics. Our findings highlight a previously overlooked risk in current unlearning practices and establish FairGU as a robust and equitable solution for the next generation of socially sustainable networked systems. The code is available at https://github.com/LuoRenqiang/FairGU.
Problem

Research questions and friction points this paper addresses:

graph unlearning
algorithmic fairness
sensitive attributes
social networks
privacy preservation
Innovation

Methods, ideas, or system contributions that make the work stand out:

fairness-aware
graph unlearning
sensitive attributes
social networks
privacy-preserving