🤖 AI Summary
Existing personalized Federated Graph Learning (pFGL) methods suffer from high communication overhead, low efficiency, and elevated privacy risks, while one-shot federated learning (OFL) fails to accommodate graph-structured data and personalization requirements. To address these challenges, this paper proposes O-pFGL, the first personalized federated graph learning framework operating in a single communication round. Its core contributions are: (1) constructing a global pseudo-graph via class-level feature distribution estimation to mitigate graph heterogeneity; (2) introducing a two-stage adaptive local fine-tuning mechanism to substantially alleviate minority-class bias; and (3) seamlessly integrating Secure Aggregation to ensure end-to-end privacy protection. Extensive experiments across 12 multi-scale graph datasets demonstrate that O-pFGL consistently outperforms state-of-the-art methods in communication efficiency, personalized model performance, and minority-class generalization.
📝 Abstract
Federated Graph Learning (FGL) has emerged as a promising paradigm for breaking data silos among distributed private graphs. In practical scenarios involving heterogeneous distributed graph data, personalized Federated Graph Learning (pFGL) aims to enhance model utility by training personalized models tailored to client needs. However, existing pFGL methods often require numerous communication rounds under heterogeneous graphs, leading to significant communication overhead and security concerns. While One-shot Federated Learning (OFL) enables collaboration in a single round, existing OFL methods are designed for image-centric tasks and are ineffective for graph data, leaving a critical gap in the field. Additionally, personalized models derived from existing methods suffer from bias, failing to generalize effectively to minority classes. To address these challenges, we propose the first $\textbf{O}$ne-shot $\textbf{p}$ersonalized $\textbf{F}$ederated $\textbf{G}$raph $\textbf{L}$earning method ($\textbf{O-pFGL}$) for node classification, compatible with Secure Aggregation protocols for privacy preservation. Specifically, for effective graph learning in one communication round, our method estimates and aggregates class-wise feature distribution statistics to construct a global pseudo-graph on the server, facilitating the training of a global graph model. To mitigate bias, we introduce a two-stage personalized training approach that adaptively balances local personal information and global insights from the pseudo-graph, improving both personalization and generalization. Extensive experiments on 12 multi-scale graph datasets demonstrate that our method significantly outperforms state-of-the-art baselines across various settings.
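The class-wise statistics pipeline described above can be sketched as follows. This is an illustrative approximation, not the paper's exact algorithm: each client reports only additive per-class statistics (feature sums, squared sums, counts), which the server combines by plain summation, the property that makes the scheme compatible with Secure Aggregation, then recovers class-wise means and variances and samples pseudo-node features for the global pseudo-graph. The function names (`client_stats`, `server_aggregate`, `build_pseudo_features`) and the Gaussian sampling choice are assumptions for illustration.

```python
import numpy as np

def client_stats(X, y, num_classes):
    """Per-class additive statistics computed locally on one client."""
    d = X.shape[1]
    s = np.zeros((num_classes, d))    # per-class feature sums
    sq = np.zeros((num_classes, d))   # per-class squared-feature sums
    n = np.zeros(num_classes)         # per-class node counts
    for c in range(num_classes):
        Xc = X[y == c]
        if len(Xc):
            s[c] = Xc.sum(axis=0)
            sq[c] = (Xc ** 2).sum(axis=0)
            n[c] = len(Xc)
    return s, sq, n

def server_aggregate(stats_list):
    """Element-wise summation across clients (Secure-Aggregation friendly),
    then recovery of class-wise mean and variance."""
    s = sum(t[0] for t in stats_list)
    sq = sum(t[1] for t in stats_list)
    n = sum(t[2] for t in stats_list)
    denom = np.maximum(n, 1)[:, None]
    mean = s / denom
    var = sq / denom - mean ** 2
    return mean, np.maximum(var, 1e-8)

def build_pseudo_features(mean, var, nodes_per_class, rng):
    """Sample pseudo-node features from the estimated class distributions."""
    feats, labels = [], []
    for c in range(mean.shape[0]):
        feats.append(rng.normal(mean[c], np.sqrt(var[c]),
                                size=(nodes_per_class, mean.shape[1])))
        labels.append(np.full(nodes_per_class, c))
    return np.vstack(feats), np.concatenate(labels)
```

In this sketch the server never sees any individual client's raw features, only sums over all clients, which is exactly the access pattern Secure Aggregation protocols provide; how the pseudo-nodes are wired into graph edges is a separate design choice not shown here.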