GFairHint: Improving Individual Fairness for Graph Neural Networks via Fairness Hint

📅 2023-05-25
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
Graph neural networks (GNNs) face challenges in simultaneously achieving individual fairness and high model utility, due to inconsistent similarity metrics, poor architectural adaptability, and high computational overhead. Method: This paper proposes GFairHint, a novel framework centered on a "fairness hint" mechanism: it learns fair representations via an auxiliary link prediction task and concatenates them with the original node embeddings; it further introduces a joint optimization objective that accommodates both externally annotated and feature-derived individual similarity measures. Contribution/Results: GFairHint is the first method to simultaneously achieve a balanced fairness–utility trade-off, compatibility with both similarity definitions, generalizability across diverse GNN architectures, and low inference overhead. Extensive experiments on five real-world graph datasets, three mainstream GNN architectures, and two similarity settings demonstrate that GFairHint outperforms state-of-the-art methods in fairness while maintaining comparable utility and significantly reducing inference cost.
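The core mechanism described above can be sketched in a few lines: a fairness embedding, trained separately on an auxiliary link-prediction task over a similarity graph, is concatenated onto the backbone GNN's node embeddings before the prediction head. The sketch below is a minimal NumPy illustration under stated assumptions, not the paper's implementation; the dense layers stand in for a trained GNN backbone and fairness module, and all dimensions and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 4 nodes with 8-dim input features (all values mocked).
num_nodes, feat_dim, gnn_dim, hint_dim, num_classes = 4, 8, 16, 6, 2
features = rng.normal(size=(num_nodes, feat_dim))

# Stand-in for the trained GNN backbone: a single dense layer here.
W_gnn = rng.normal(size=(feat_dim, gnn_dim))
node_emb = np.tanh(features @ W_gnn)          # (4, 16) backbone node embeddings

# Stand-in for the fairness module, which the paper trains on an
# auxiliary link-prediction task over the individual-similarity graph.
W_fair = rng.normal(size=(feat_dim, hint_dim))
fair_emb = np.tanh(features @ W_fair)         # (4, 6) "fairness hint" embeddings

# Key idea: concatenate the hint onto the node embedding, then predict.
hinted = np.concatenate([node_emb, fair_emb], axis=1)  # (4, 22)
W_out = rng.normal(size=(gnn_dim + hint_dim, num_classes))
logits = hinted @ W_out                       # (4, 2) per-node class logits
```

Because the hint is only concatenated at inference time, the added cost is one extra embedding lookup and a wider output layer, which is consistent with the low-overhead claim above.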
📝 Abstract
Given the growing concerns about fairness in machine learning and the impressive performance of Graph Neural Networks (GNNs) on graph data learning, algorithmic fairness in GNNs has attracted significant attention. While many existing studies improve fairness at the group level, only a few works promote individual fairness, which renders similar outcomes for similar individuals. A desirable framework that promotes individual fairness should (1) balance between fairness and performance, (2) accommodate two commonly-used individual similarity measures (externally annotated and computed from input features), (3) generalize across various GNN models, and (4) be computationally efficient. Unfortunately, none of the prior work achieves all the desirables. In this work, we propose a novel method, GFairHint, which promotes individual fairness in GNNs and achieves all aforementioned desirables. GFairHint learns fairness representations through an auxiliary link prediction task, and then concatenates the representations with the learned node embeddings in original GNNs as a "fairness hint". Through extensive experimental investigations on five real-world graph datasets under three prevalent GNN models covering both individual similarity measures above, GFairHint achieves the best fairness results in almost all combinations of datasets with various backbone models, while generating comparable utility results, with much less computational cost compared to the previous state-of-the-art (SoTA) method.
Problem

Research questions and friction points this paper is trying to address.

Fairness
Graph Neural Networks
Individual-level Fairness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Neural Networks
Fairness Enhancement
Efficient Computation