SaGIF: Improving Individual Fairness in Graph Neural Networks via Similarity Encoding

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing graph neural networks (GNNs) often lack individual fairness (i.e., similar nodes should receive similar predictions) due to two key bottlenecks: unclear sources of unfairness and incomplete modeling of node similarity. Method: We propose SaGIF, a similarity-encoding-based fairness-enhancement framework. Its core components are a "similarity consistency" measure, which quantifies the discrepancy between the similar individuals identified via graph structure versus those identified via node features, and two similarity metrics based on topology fusion and feature fusion, enabling the learning of fairness-aware, node-wise similarity representations. Crucially, these similarity representations are learned independently of the prediction task, preserving model utility. Results: Extensive experiments on multiple real-world graph datasets show that SaGIF consistently outperforms state-of-the-art individually fair GNNs while maintaining utility performance.

📝 Abstract
Individual fairness (IF) in graph neural networks (GNNs), which emphasizes that similar individuals should receive similar outcomes from GNNs, has been a critical issue. Despite its importance, this area remains largely underexplored in terms of (1) a clear understanding of what induces individual unfairness in GNNs and (2) a comprehensive consideration of identifying similar individuals. To bridge these gaps, we conduct a preliminary analysis to explore the underlying reason for individual unfairness and observe correlations between IF and similarity consistency, a concept introduced to evaluate the discrepancy in identifying similar individuals based on graph structure versus node features. Inspired by our observations, we introduce two metrics to assess individual similarity from two distinct perspectives: topology fusion and feature fusion. Building upon these metrics, we propose Similarity-aware GNNs for Individual Fairness, named SaGIF. The key insight behind SaGIF is the integration of individual similarities by independently learning similarity representations, leading to an improvement of IF in GNNs. Our experiments on several real-world datasets validate the effectiveness of our proposed metrics and SaGIF. Specifically, SaGIF consistently outperforms state-of-the-art IF methods while maintaining utility performance. Code is available at: https://github.com/ZzoomD/SaGIF.
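The abstract's notion of "similarity consistency", i.e., the discrepancy between the similar individuals found via graph structure versus node features, can be illustrated with a minimal numpy sketch. Here the feature view uses cosine similarity and the topology view uses neighborhood Jaccard similarity; both are illustrative assumptions, not necessarily the paper's exact metrics, and consistency is measured as the average overlap of the two top-k neighbor sets.

```python
import numpy as np

def topk_similar(sim, k):
    """Indices of the k most similar other nodes per row (self excluded)."""
    s = sim.copy()
    np.fill_diagonal(s, -np.inf)
    return np.argsort(-s, axis=1)[:, :k]

def similarity_consistency(adj, feats, k=2):
    """Average overlap between feature-based and topology-based top-k sets.

    Illustrative stand-in for the paper's concept, not its exact metric.
    """
    n = adj.shape[0]
    # Feature view: cosine similarity between node feature vectors.
    norm = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    sim_feat = norm @ norm.T
    # Topology view: Jaccard similarity of node neighborhoods.
    inter = adj @ adj.T                      # common-neighbor counts
    deg = adj.sum(axis=1)
    union = deg[:, None] + deg[None, :] - inter
    sim_topo = inter / np.maximum(union, 1e-12)
    # Consistency: mean fraction of shared top-k neighbors per node.
    tf, tt = topk_similar(sim_feat, k), topk_similar(sim_topo, k)
    return float(np.mean([len(set(tf[i]) & set(tt[i])) / k for i in range(n)]))

# Toy graph: two triangles {0,1,2} and {3,4,5} with cluster-aligned features,
# so both views agree perfectly and consistency is maximal.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
X = np.array([[1, 0]] * 3 + [[0, 1]] * 3, dtype=float)
print(similarity_consistency(A, X, k=2))  # 1.0
```

On graphs where structure and features disagree (e.g., an edge between nodes with dissimilar features), the score drops below 1, which is the kind of discrepancy the paper correlates with individual unfairness.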
Problem

Research questions and friction points this paper is trying to address.

Understanding causes of individual unfairness in GNNs
Identifying similar individuals comprehensively in GNNs
Improving individual fairness via similarity-aware representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Similarity encoding for individual fairness
Topology and feature fusion metrics
Independent similarity representation learning
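One way to picture how fused similarity views could yield node-wise similarity representations, separate from the prediction task, is a spectral encoding: fuse a topology-based and a feature-based similarity matrix, then take leading eigenvectors of the fused matrix as per-node encodings that a downstream GNN could consume alongside the raw features. This is a hypothetical sketch of the general idea, not SaGIF's actual architecture; the fusion weight `alpha` and the eigenvector choice are assumptions.

```python
import numpy as np

def similarity_encoding(sim_topo, sim_feat, alpha=0.5, dim=2):
    """Top-`dim` eigenvectors of the fused similarity matrix (illustrative)."""
    fused = alpha * sim_topo + (1 - alpha) * sim_feat
    fused = (fused + fused.T) / 2          # keep it symmetric for eigh
    vals, vecs = np.linalg.eigh(fused)     # eigenvalues in ascending order
    return vecs[:, -dim:]                  # leading eigenvectors, shape (n, dim)

# Toy symmetric similarity matrices for 4 nodes (two loose clusters).
S_topo = np.array([[1, .8, .1, 0], [.8, 1, .2, .1],
                   [.1, .2, 1, .9], [0, .1, .9, 1.]])
S_feat = np.array([[1, .7, 0, .1], [.7, 1, .1, 0],
                   [0, .1, 1, .8], [.1, 0, .8, 1.]])
enc = similarity_encoding(S_topo, S_feat, alpha=0.5, dim=2)
# Append the encodings to (here, one-hot placeholder) node features.
aug = np.concatenate([np.eye(4), enc], axis=1)
print(aug.shape)  # (4, 6)
```

Keeping the similarity encodings as a separate, task-independent input channel mirrors the "independent similarity representation learning" idea: fairness-relevant similarity information reaches the model without being entangled with the prediction objective.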