FairGSE: Fairness-Aware Graph Neural Network without High False Positive Rates

📅 2025-11-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing fairness-aware graph neural networks (GNNs) over-optimize statistical parity or equal opportunity, severely compromising negative-class prediction capability and yielding unacceptably high false positive rates (FPR)—a critical limitation for high-stakes applications. This work is the first to identify and characterize this systematic FPR degradation in fairness-aware GNNs. We propose a fine-grained calibration paradigm that jointly optimizes fairness and negative-class discriminability. Our core innovation introduces two-dimensional structural entropy (2D-SE) as a unified objective, enabling simultaneous modeling of structural robustness, fairness constraints, and classification performance within the GNN framework. Extensive experiments on multiple real-world graph datasets demonstrate that our method reduces FPR by 39% on average compared to state-of-the-art fairness-aware GNNs, while preserving competitive fairness improvements—effectively reconciling fairness with reliable negative-class prediction.

📝 Abstract
Graph neural networks (GNNs) have emerged as the mainstream paradigm for graph representation learning due to their effective message aggregation. However, this advantage also amplifies biases inherent in graph topology, raising fairness concerns. Existing fairness-aware GNNs provide satisfactory performance on fairness metrics such as Statistical Parity and Equal Opportunity while maintaining acceptable accuracy trade-offs. Unfortunately, we observe that this pursuit of fairness metrics neglects the GNN's ability to predict negative labels, which leaves their predictions with extremely high False Positive Rates (FPR), causing harmful effects in high-risk scenarios. To this end, we advocate that classification performance should be carefully calibrated while improving fairness, rather than simply constraining accuracy loss. Furthermore, we propose Fair GNN via Structural Entropy (FairGSE), a novel framework that maximizes two-dimensional structural entropy (2D-SE) to improve fairness without neglecting false positives. Experiments on several real-world datasets show FairGSE reduces FPR by 39% compared to state-of-the-art fairness-aware GNNs, with comparable fairness improvement.
Problem

Research questions and friction points this paper is trying to address.

Identifies systematically high false positive rates in existing fairness-aware graph neural networks
Calibrates classification performance while improving fairness, rather than merely constraining accuracy loss
Reduces false positive rates while preserving comparable fairness improvements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces two-dimensional structural entropy (2D-SE) maximization as a unified training objective
Jointly models structural robustness, fairness constraints, and classification performance
Reduces false positive rates in GNN predictions without neglecting fairness
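The bullets above center on two-dimensional structural entropy (2D-SE), the quantity FairGSE maximizes. As background, here is a minimal sketch of how 2D-SE is computed for an undirected graph under a fixed community partition, following Li and Pan's structural information formula; the function name, interface, and example graph are illustrative and not taken from the paper.

```python
import math
from collections import defaultdict

def structural_entropy_2d(edges, partition):
    """Two-dimensional structural entropy of an undirected graph
    under a node-to-community partition (Li & Pan's formulation).

    edges:     list of (u, v) pairs, no self-loops
    partition: dict mapping node -> community id
    """
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    two_m = sum(degree.values())  # 2 * |E|, total volume of the graph

    vol = defaultdict(int)  # community volume: sum of member degrees
    cut = defaultdict(int)  # number of edges leaving each community
    for node, d in degree.items():
        vol[partition[node]] += d
    for u, v in edges:
        if partition[u] != partition[v]:
            cut[partition[u]] += 1
            cut[partition[v]] += 1

    h = 0.0
    # Intra-community term: uncertainty of a random walk step
    # landing on a node, given its community.
    for node, d in degree.items():
        h -= (d / two_m) * math.log2(d / vol[partition[node]])
    # Inter-community term: uncertainty of crossing a community boundary.
    for j, vj in vol.items():
        h -= (cut[j] / two_m) * math.log2(vj / two_m)
    return h

# Toy example: two triangles bridged by a single edge,
# partitioned into their natural communities.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(structural_entropy_2d(edges, part))
```

A partition that respects the graph's community structure keeps the inter-community term small; FairGSE's contribution, per the summary above, is to fold fairness constraints and classification performance into an objective built on this entropy rather than to compute it in isolation.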
Zhenqiang Ye
College of Cyber Security, Jinan University; Engineering Research Center of Trustworthy AI (Ministry of Education)
Jinjie Lu
College of Cyber Security, Jinan University; Engineering Research Center of Trustworthy AI (Ministry of Education)
Tianlong Gu
Professor, Jinan University, Guangzhou
Trustworthy AI · Ethically Aligned Design · Data Governance
Fengrui Hao
College of Cyber Security, Jinan University; Engineering Research Center of Trustworthy AI (Ministry of Education)
Xuemin Wang
Guangxi Key Laboratory of Trusted Software