Fairness-Aware Graph Representation Learning with Limited Demographic Information

πŸ“… 2025-11-17
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Graph neural networks (GNNs) often suffer from unfairness when sensitive attributes (e.g., race, gender) are partially missingβ€”a common yet underexplored scenario in fair representation learning. Method: We propose a proxy-driven fair graph representation learning framework. First, it constructs reliable proxy variables from available demographic information. Second, it introduces a cross-group consistency embedding mechanism coupled with an adaptive confidence-weighted strategy to dynamically balance fairness and predictive utility. Contribution/Results: Theoretically, we establish the first bias upper bound guarantee for fair graph learning under limited sensitive information. Empirically, on multiple benchmark datasets, our framework significantly reduces inter-group representation shift (average reduction: 32.7%) and prediction bias (average reduction: 28.4%), while maintaining state-of-the-art predictive performance. These results demonstrate its effectiveness, robustness, and generalizability.
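The first step described above, constructing proxy variables from the partially observed demographics, can be sketched as label propagation over the graph; this is an illustrative reconstruction under assumed inputs (a dense adjacency matrix, a mask of nodes with known groups), not the paper's exact mechanism:

```python
import torch

def proxy_from_partial(adj, known_mask, known_groups, num_iters=5):
    """Illustrative proxy construction (not FairGLite's exact mechanism):
    propagate partially observed group labels over the graph so nodes with
    unknown demographics receive a soft proxy from their neighbors.

    adj:          [N, N] dense 0/1 adjacency matrix (assumed input format)
    known_mask:   bool [N], True where the sensitive attribute is observed
    known_groups: long [N], group label (0/1), valid only where known_mask
    """
    n = adj.shape[0]
    proxy = torch.full((n,), 0.5)                         # uninformative prior
    proxy[known_mask] = known_groups[known_mask].float()  # seed observed nodes
    deg = adj.sum(dim=1).clamp(min=1)
    for _ in range(num_iters):
        proxy = (adj @ proxy) / deg                       # mean neighbor proxy
        proxy[known_mask] = known_groups[known_mask].float()  # re-clamp observed
    return proxy                                          # soft proxy in [0, 1]
```

On a path graph with the two endpoints labeled 0 and 1, the interior nodes converge toward interpolated soft proxies, which downstream fairness terms can then treat as group memberships.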

πŸ“ Abstract
Ensuring fairness in Graph Neural Networks is fundamental to promoting trustworthy and socially responsible machine learning systems. In response, numerous fair graph learning methods have been proposed in recent years. However, most of them assume full access to demographic information, a requirement rarely met in practice due to privacy, legal, or regulatory restrictions. To address this gap, this paper introduces a novel fair graph learning framework that mitigates bias in graph learning under limited demographic information. Specifically, we propose a mechanism guided by partial demographic data to generate proxies for demographic information, and design a strategy that enforces consistent node embeddings across demographic groups. In addition, we develop an adaptive confidence strategy that dynamically adjusts each node's contribution to fairness and utility based on prediction confidence. We further provide theoretical analysis demonstrating that our framework, FairGLite, achieves provable upper bounds on group fairness metrics, offering formal guarantees for bias mitigation. Through extensive experiments on multiple datasets and fair graph learning frameworks, we demonstrate the framework's effectiveness in both mitigating bias and maintaining model utility.
Problem

Research questions and friction points this paper is trying to address.

Mitigating bias in graph learning with limited demographic data access
Generating demographic proxies and ensuring consistent node embeddings
Achieving group fairness guarantees while maintaining model utility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates demographic proxies using partial data
Enforces consistent embeddings across demographic groups
Dynamically adjusts fairness contributions via confidence strategy
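The third innovation, adaptive confidence weighting, can be sketched as a training objective in which each node's prediction confidence gates how much it contributes to the fairness term versus the utility term. Everything below (the gating scheme, the demographic-parity-style gap, the `alpha` trade-off knob) is an assumed illustration, not the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def confidence_weighted_loss(logits, labels, proxy_groups, alpha=1.0):
    """Sketch of an adaptive confidence-weighted objective (assumed form):
    high-confidence nodes contribute more to the fairness term, while
    low-confidence nodes contribute more to the utility term.

    logits:       [N, 2] classifier outputs
    labels:       long [N] task labels
    proxy_groups: long [N] binary (proxy) demographic groups
    alpha:        fairness/utility trade-off weight (hypothetical knob)
    """
    probs = F.softmax(logits, dim=1)
    confidence = probs.max(dim=1).values                    # per-node confidence
    utility = F.cross_entropy(logits, labels, reduction="none")

    # Demographic-parity-style gap: difference in mean positive rates
    pos = probs[:, 1]
    g0, g1 = pos[proxy_groups == 0], pos[proxy_groups == 1]
    fairness_gap = (g0.mean() - g1.mean()).abs()

    # Confidence down-weights each node's utility term and scales the
    # shared fairness penalty.
    loss = ((1 - confidence) * utility).mean() \
         + alpha * confidence.mean() * fairness_gap
    return loss
```

Using soft proxies from the first step rather than true sensitive attributes, this kind of gated objective is one plausible way to realize "dynamically adjusts each node's contribution to fairness and utility based on prediction confidence."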
πŸ”Ž Similar Papers
No similar papers found.