🤖 AI Summary
Existing fairness-aware graph neural networks (GNNs) over-optimize statistical parity or equal opportunity, severely compromising negative-class prediction capability and yielding unacceptably high false positive rates (FPR)—a critical limitation for high-stakes applications. This work is the first to identify and characterize this systematic FPR degradation in fairness-aware GNNs. We propose a fine-grained calibration paradigm that jointly optimizes fairness and negative-class discriminability. Our core innovation introduces two-dimensional structural entropy (2D-SE) as a unified objective, enabling simultaneous modeling of structural robustness, fairness constraints, and classification performance within the GNN framework. Extensive experiments on multiple real-world graph datasets demonstrate that our method reduces FPR by 39% on average compared to state-of-the-art fairness-aware GNNs, while preserving competitive fairness improvements—effectively reconciling fairness with reliable negative-class prediction.
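For reference, two-dimensional structural entropy has a standard definition (due to Li and Pan); the partition notation below is ours and the paper may use a variant. Given a graph $G$ with $m$ edges and a partition $\mathcal{P}=\{X_1,\dots,X_L\}$ of its vertices into modules:

```latex
H^{2}(G) \;=\; -\sum_{j=1}^{L}\,\sum_{v_i \in X_j} \frac{d_i}{2m} \log_2 \frac{d_i}{V_j}
\;-\; \sum_{j=1}^{L} \frac{g_j}{2m} \log_2 \frac{V_j}{2m}
```

where $d_i$ is the degree of vertex $v_i$, $V_j = \sum_{v_i \in X_j} d_i$ is the volume of module $X_j$, and $g_j$ is the number of edges with exactly one endpoint in $X_j$. The first term measures uncertainty within modules, the second the cost of crossing module boundaries; maximizing or minimizing it over partitions trades off these two.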
📝 Abstract
Graph neural networks (GNNs) have emerged as the mainstream paradigm for graph representation learning due to their effective message aggregation. However, this advantage also amplifies biases inherent in graph topology, raising fairness concerns. Existing fairness-aware GNNs deliver satisfactory performance on fairness metrics such as Statistical Parity and Equal Opportunity while maintaining acceptable accuracy trade-offs. Unfortunately, we observe that this pursuit of fairness metrics neglects the GNN's ability to predict negative labels, leaving their predictions with extremely high False Positive Rates (FPR) and causing harm in high-risk scenarios. To this end, we advocate that classification performance should be carefully calibrated while improving fairness, rather than simply constraining accuracy loss. Furthermore, we propose Fair GNN via Structural Entropy (**FairGSE**), a novel framework that maximizes two-dimensional structural entropy (2D-SE) to improve fairness without neglecting false positives. Experiments on several real-world datasets show FairGSE reduces FPR by 39% versus state-of-the-art fairness-aware GNNs, with comparable fairness improvement.
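The failure mode the abstract describes can be made concrete with the two metrics involved. Below is a minimal sketch (function names are ours, not from the paper): a classifier can achieve a perfect Statistical Parity score by over-predicting the positive class in both groups, while its FPR goes to 100%.

```python
def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN): the share of truly negative examples
    that the model wrongly predicts as positive."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn)

def statistical_parity_gap(y_pred, sensitive):
    """Delta_SP = |P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)|."""
    def positive_rate(group):
        preds = [p for p, s in zip(y_pred, sensitive) if s == group]
        return sum(preds) / len(preds)
    return abs(positive_rate(0) - positive_rate(1))

# Toy illustration: predicting positive everywhere yields perfect
# parity (gap = 0.0) but the worst possible FPR (1.0).
y_true    = [0, 0, 0, 0, 1, 1]
y_pred    = [1, 1, 1, 1, 1, 1]   # degenerate "always positive" classifier
sensitive = [0, 0, 0, 1, 1, 1]   # binary sensitive attribute per node
print(false_positive_rate(y_true, y_pred))        # 1.0
print(statistical_parity_gap(y_pred, sensitive))  # 0.0
```

This is exactly why the authors argue that negative-class discriminability must be calibrated jointly with fairness, rather than constraining overall accuracy loss alone: accuracy and parity metrics can both look acceptable while the FPR is unacceptable.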