🤖 AI Summary
Existing graph federated learning (GFL) methods for cross-client subgraph interconnection suffer from privacy risks due to node embedding leakage or face scalability bottlenecks from computationally expensive operations.
Method: We propose FedLap, the first GFL framework to offer strong privacy guarantees without transmitting sensitive node embeddings. FedLap leverages spectral-domain Laplacian smoothing to fuse global topological structure and model cross-subgraph node dependencies in the frequency domain—eliminating explicit embedding exchange and dense graph convolutions. It supports fully decentralized training with low communication overhead and comes with a formal privacy analysis.
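To make the core operation concrete, the sketch below illustrates what Laplacian smoothing of node features in the spectral domain looks like in general: eigendecompose the normalized graph Laplacian and attenuate high-frequency components. This is only an illustration of the underlying spectral operation, not FedLap's actual federated algorithm; all names (`laplacian_smooth`, the `strength` parameter, the low-pass filter shape) are assumptions for this example.

```python
import numpy as np

def laplacian_smooth(A, X, strength=0.9):
    """Low-pass filter node features X via the spectrum of the graph
    Laplacian of adjacency A (illustrative, not FedLap's protocol)."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    mask = deg > 0
    d_inv_sqrt[mask] = deg[mask] ** -0.5
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # Spectral decomposition: L = U diag(lam) U^T, eigenvalues lam in [0, 2]
    lam, U = np.linalg.eigh(L)
    # Low-pass filter h(lam) = 1 - strength * lam / 2 shrinks
    # high-frequency (rough) components, smoothing features along edges
    h = 1.0 - strength * lam / 2.0
    return U @ (h[:, None] * (U.T @ X))

# Tiny example: path graph 0-1-2 with one-hot node features
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.eye(3)
X_s = laplacian_smooth(A, X)
```

After smoothing, each node's feature vector mixes in mass from its neighbors (e.g. `X_s[0, 1]` becomes positive), which is the low-pass effect that propagates information along the graph structure.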
Results: Extensive experiments on multiple benchmark datasets demonstrate that FedLap achieves performance comparable to or better than state-of-the-art methods under strict privacy constraints, while significantly reducing communication and computational costs.
📝 Abstract
We consider the problem of federated learning (FL) with graph-structured data distributed across multiple clients. In particular, we address the prevalent scenario of interconnected subgraphs, where interconnections between clients significantly influence the learning process. Existing approaches suffer from critical limitations: they either require the exchange of sensitive node embeddings, posing privacy risks, or rely on computationally intensive steps, which hinders scalability. To tackle these challenges, we propose FedLap, a novel framework that leverages global structure information via Laplacian smoothing in the spectral domain to effectively capture inter-node dependencies while ensuring privacy and scalability. We provide a formal analysis demonstrating that FedLap preserves privacy. Notably, FedLap is the first subgraph FL scheme with strong privacy guarantees. Extensive experiments on benchmark datasets demonstrate that FedLap achieves competitive or superior utility compared to existing techniques.