🤖 AI Summary
To address the challenge of jointly modeling topological structure and semantic information in decentralized federated graph learning, this paper proposes a dual-topology adaptive communication mechanism. Under a decentralized architecture, it constructs two complementary communication topologies—semantic-aware and structure-aware—and dynamically optimizes neighbor selection and model aggregation based on the structural heterogeneity of each client's local subgraph. This is the first approach to achieve joint structural-semantic modeling of graph data without a central server, integrating graph neural networks, distributed optimization, and topology-adaptive techniques. Extensive experiments on eight real-world graph datasets demonstrate that the method achieves an average accuracy improvement of 3.26% over state-of-the-art baselines, significantly enhancing generalization capability and communication efficiency, particularly under heterogeneous subgraph settings.
📝 Abstract
Decentralized Federated Learning (DFL) has emerged as a robust distributed paradigm that circumvents the single-point-of-failure and communication-bottleneck risks of centralized architectures. However, a significant challenge arises because existing DFL optimization strategies, primarily designed for tasks such as computer vision, fail to exploit the unique topological information inherent in local subgraphs. Notably, while Federated Graph Learning (FGL) is tailored for graph data, it is predominantly implemented in a centralized server-client model, failing to leverage the benefits of decentralization. To bridge this gap, we propose DFed-SST, a decentralized federated graph learning framework with adaptive communication. The core of our method is a dual-topology adaptive communication mechanism that leverages the unique topological features of each client's local subgraph to dynamically construct and optimize the inter-client communication topology. This allows our framework to guide model aggregation efficiently in the face of heterogeneity. Extensive experiments on eight real-world datasets consistently demonstrate the superiority of DFed-SST, achieving a 3.26% improvement in average accuracy over baseline methods.
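To make the dual-topology idea concrete, the sketch below shows one plausible way such a mechanism could work. This is not the paper's actual algorithm: the helper names (`build_topology`, `aggregate`), the use of embedding cosine similarity for the semantic-aware topology, degree-histogram similarity for the structure-aware topology, a mixing weight `alpha`, and top-k neighbor selection are all illustrative assumptions.

```python
# Hypothetical sketch of a dual-topology adaptive communication round.
# All names and design choices here are illustrative, not from DFed-SST.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors (with a small epsilon for safety)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def build_topology(embeddings, degree_hists, top_k=2, alpha=0.5):
    """Combine a semantic-aware similarity (client embedding cosine) with a
    structure-aware similarity (local-subgraph degree-histogram cosine),
    then let each client keep its top_k most similar peers as neighbors."""
    n = len(embeddings)
    sim = np.full((n, n), -np.inf)  # -inf on the diagonal excludes self-links
    for i in range(n):
        for j in range(n):
            if i != j:
                sem = cosine(embeddings[i], embeddings[j])
                struct = cosine(degree_hists[i], degree_hists[j])
                sim[i, j] = alpha * sem + (1 - alpha) * struct
    return {i: list(np.argsort(sim[i])[::-1][:top_k]) for i in range(n)}

def aggregate(params, neighbors):
    """Average each client's model parameters with those of its neighbors."""
    return {i: np.mean([params[i]] + [params[j] for j in nbrs], axis=0)
            for i, nbrs in neighbors.items()}
```

In a full decentralized round, each client would recompute its neighbor set from freshly exchanged statistics before aggregating, so the communication topology adapts as local models evolve.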