🤖 AI Summary
To address unfair predictions by Graph Neural Networks (GNNs) trained on biased data, this paper proposes the first fair learning framework driven by subgroup distribution alignment, organized in three stages: disentanglement, amplification, and debiasing. First, a variational-autoencoder-based disentangler explicitly separates attribute bias, structure bias, and potential bias in the learned node representations. Second, each isolated bias is amplified so that it can be identified and removed more effectively. Third, the Wasserstein distance guides latent-space distribution alignment across demographic subgroups, enabling end-to-end optimization under fairness constraints such as Equalized Odds. Evaluated on five benchmark datasets, the method consistently outperforms ten state-of-the-art approaches, achieving a superior trade-off between accuracy and fairness. To the best of the authors' knowledge, this is the first work to realize fine-grained fair representation learning in GNNs via explicit distribution alignment, a notable step toward principled, distribution-aware fairness for graph representation learning.
📝 Abstract
Graph Neural Networks (GNNs) have become essential tools for graph representation learning in various domains, such as social media and healthcare. However, they often produce unfair predictions due to inherent biases in node attributes and graph structure. To address these challenges, we propose a novel GNN framework, DAB-GNN, that Disentangles, Amplifies, and deBiases attribute bias, structure bias, and potential bias in the GNN mechanism. DAB-GNN employs a disentanglement and amplification module that isolates and amplifies each type of bias through specialized disentanglers, followed by a debiasing module that minimizes the distance between subgroup distributions. Extensive experiments on five datasets demonstrate that DAB-GNN significantly outperforms ten state-of-the-art competitors in achieving an optimal balance between accuracy and fairness. The codebase of DAB-GNN is available at https://github.com/Bigdasgit/DAB-GNN
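The debiasing idea above, minimizing the distance between subgroup distributions, can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes scalar per-node scores (e.g., logits) split by a binary sensitive attribute, and approximates the 1-Wasserstein distance between the two subgroups by comparing empirical quantiles; the function name `wasserstein_1d` and the toy data are illustrative only.

```python
import numpy as np

def wasserstein_1d(a, b, n_quantiles=100):
    """Empirical 1-Wasserstein distance between two 1-D samples,
    approximated by averaging |quantile_a(q) - quantile_b(q)| over q."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    return float(np.mean(np.abs(np.quantile(a, qs) - np.quantile(b, qs))))

# Toy per-node scores split by a sensitive attribute (hypothetical data).
rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, size=500)
group_b = rng.normal(0.5, 1.0, size=500)

# In a fairness-regularized objective, this term would be added to the
# task loss so training pushes the two subgroup distributions together.
penalty = wasserstein_1d(group_a, group_b)
```

In a full pipeline this penalty would be computed on differentiable model outputs (e.g., via a sliced or entropic-regularized Wasserstein estimator) rather than on raw NumPy arrays, so that gradients can flow back into the GNN.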