🤖 AI Summary
Federated Graph Learning (FGL) often prioritizes node classification accuracy while neglecting fairness, which leads to weak recognition of minority-class nodes and to topological bias induced by heterogeneous inter-client connections, degrading performance for marginalized nodes. This paper presents the first systematic formulation of dual fairness objectives in FGL, class-level fairness and topology-aware fairness, and proposes FairFGL, a collaborative fair learning framework. Methodologically, it introduces (1) client-side modules (history-preserving training, majority-class alignment, and gradient correction) to mitigate local bias, and (2) server-side mechanisms (clustering-weighted aggregation and conflict-aware update reconciliation) to harmonize divergent fairness constraints across clients. Evaluated on eight benchmark datasets, the framework achieves up to a 22.62% improvement in Macro-F1, substantially strengthening minority-class representation learning while also improving overall accuracy and accelerating convergence.
📝 Abstract
Federated Graph Learning (FGL) enables privacy-preserving, distributed training of graph neural networks without sharing raw data. Among its approaches, subgraph-FL has become the dominant paradigm, with most work focused on improving overall node classification accuracy. However, these methods often overlook fairness due to the complexity of node features, labels, and graph structures. In particular, they perform poorly on nodes with disadvantaged properties, such as being in the minority class within subgraphs or having heterophilous connections (neighbors with dissimilar labels or misleading features). This reveals a critical issue: high accuracy can mask degraded performance on structurally or semantically marginalized nodes. To address this, we advocate for two fairness goals: (1) improving representation of minority-class nodes for class-wise fairness and (2) mitigating topological bias from heterophilous connections for topology-aware fairness. We propose FairFGL, a novel framework that enhances fairness through fine-grained graph mining and collaborative learning. On the client side, the History-Preserving Module prevents overfitting to dominant local classes, while the Majority Alignment Module refines representations of heterophilous majority-class nodes. The Gradient Modification Module transfers minority-class knowledge from structurally favorable clients to improve fairness. On the server side, FairFGL uploads only the most influenced subset of parameters to reduce communication costs and better reflect local distributions. A cluster-based aggregation strategy reconciles conflicting updates and curbs global majority dominance. Extensive evaluations on eight benchmarks show FairFGL significantly improves minority-group performance, achieving up to a 22.62% Macro-F1 gain while enhancing convergence over state-of-the-art baselines.
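The abstract describes two server-side ideas at a high level: uploading only the most influenced subset of parameters, and cluster-based aggregation that curbs majority dominance. The paper's exact procedures are not given here, so the sketch below is a minimal illustration under assumed design choices: "most influenced" is read as magnitude-based top-k selection, and clustering is a two-cluster cosine k-means with equal weight per cluster mean. Both function names are hypothetical.

```python
import numpy as np

def select_top_influenced(delta, frac=0.1):
    """Keep only the fraction of parameters with the largest absolute
    change; everything else is zeroed and need not be uploaded.
    (Assumed interpretation of 'most influenced subset'.)"""
    flat = np.abs(delta).ravel()
    k = max(1, int(frac * flat.size))
    thresh = np.partition(flat, -k)[-k]  # k-th largest magnitude
    mask = np.abs(delta) >= thresh
    return np.where(mask, delta, 0.0), mask

def cluster_weighted_aggregate(updates, n_iter=10):
    """Split client updates into two groups by cosine similarity, average
    within each group, then average the two group means, so a large bloc
    of near-identical (majority-driven) updates gets no extra weight."""
    X = np.stack([u.ravel() for u in updates])
    Xn = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    # Deterministic init: first update, plus the one least similar to it.
    centers = np.stack([Xn[0], Xn[np.argmin(Xn @ Xn[0])]])
    for _ in range(n_iter):
        labels = np.argmax(Xn @ centers.T, axis=1)
        for c in range(2):
            if np.any(labels == c):
                centers[c] = Xn[labels == c].mean(axis=0)
    means = [X[labels == c].mean(axis=0)
             for c in range(2) if np.any(labels == c)]
    return np.mean(means, axis=0).reshape(updates[0].shape)
```

With three near-identical client updates and one divergent update, plain averaging weights the majority bloc 3:1, whereas the cluster-weighted aggregate gives each bloc's mean equal weight, which mirrors the stated goal of curbing global majority dominance.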