🤖 AI Summary
This work addresses fairness in federated graph learning, where performance on underrepresented node groups degrades sharply due to three interrelated factors: label skew, topology confounding, and the dilution of updates from hard clients during aggregation. To tackle these root causes, the paper proposes BoostFGL, a framework that jointly identifies and mitigates all three. It improves local representation quality through client-side node and topology boosting, and introduces a server-side aggregation mechanism that is both difficulty-aware and reliability-aware, improving fairness while preserving privacy. Extensive experiments across nine datasets demonstrate BoostFGL's effectiveness, yielding an average 8.43% improvement in Overall-F1 over state-of-the-art methods without compromising overall performance.
📝 Abstract
Federated graph learning (FGL) enables collaborative training of graph neural networks (GNNs) across decentralized subgraphs without exposing raw data. While existing FGL methods often achieve high overall accuracy, we show that this average performance can conceal severe degradation on disadvantaged node groups. From a fairness perspective, these disparities arise systematically from three coupled sources: label skew toward majority patterns, topology confounding in message propagation, and aggregation dilution of updates from hard clients. To address this, we propose **BoostFGL**, a boosting-style framework for fairness-aware FGL. BoostFGL introduces three coordinated mechanisms: ① *Client-side node boosting*, which reshapes local training signals to emphasize systematically under-served nodes; ② *Client-side topology boosting*, which reallocates propagation emphasis toward reliable yet underused structures and attenuates misleading neighborhoods; and ③ *Server-side model boosting*, which performs difficulty- and reliability-aware aggregation to preserve informative updates from hard clients while stabilizing the global model. Extensive experiments on nine datasets show that BoostFGL delivers substantial fairness gains, improving Overall-F1 by 8.43%, while preserving competitive overall performance against strong FGL baselines.
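The server-side model boosting described above can be illustrated with a minimal sketch. The weighting scheme below is an assumption for illustration only, not the paper's actual formula: each client update is weighted by a difficulty score (e.g., local loss on hard groups) damped by a reliability score, then the weights are normalized before averaging, so hard clients are upweighted without letting unreliable updates dominate.

```python
import numpy as np

def boosted_aggregate(updates, difficulties, reliabilities):
    """Difficulty- and reliability-aware aggregation (illustrative sketch).

    updates:       list of per-client parameter arrays, all the same shape
    difficulties:  per-client difficulty scores (e.g., local loss), higher = harder
    reliabilities: per-client reliability scores in [0, 1]
    """
    # Upweight hard clients, but damp each weight by reliability so a
    # noisy update from an unreliable client cannot dominate the average.
    raw = np.asarray(difficulties) * np.asarray(reliabilities)
    weights = raw / raw.sum()              # normalize to a convex combination
    stacked = np.stack(updates)            # shape: (n_clients, *param_shape)
    # Contract the client axis: a weighted average of the updates.
    return np.tensordot(weights, stacked, axes=1)

# Example: three clients with scalar "parameters" for brevity
updates = [np.array([1.0]), np.array([2.0]), np.array([3.0])]
global_update = boosted_aggregate(updates,
                                  difficulties=[0.2, 0.5, 0.3],
                                  reliabilities=[1.0, 0.8, 0.5])
```

With these toy scores the normalized weights are 4/15, 8/15, and 3/15, so the hardest reliable client (the second one) contributes most to the aggregated update.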