Sharpness-aware Federated Graph Learning

๐Ÿ“… 2025-12-18
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
In federated graph learning (FGL), data heterogeneity poses two key challenges: (1) local models easily converge to sharp minima, degrading generalization; and (2) dimensional collapse of the learned representations impairs GNN classification performance. To address these, the paper proposes SEAL, a framework that jointly optimizes loss-landscape sharpness and local representation correlation. Its core contributions are: (1) a sharpness-aware optimization objective that explicitly regularizes the sharpness of local models; and (2) a correlation-matrix-based regularization term that mitigates representation dimension collapse. Evaluated on multiple graph classification benchmarks, SEAL consistently outperforms state-of-the-art FGL methods, achieving significant and stable gains in both classification accuracy and cross-distribution generalization. Notably, the improvement remains robust as the number of clients increases.
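The sharpness-aware objective described above follows the spirit of sharpness-aware minimization (SAM): ascend to a worst-case weight perturbation within a small ball, then descend using the gradient taken at that perturbed point. A minimal NumPy sketch of one such update, on a toy quadratic loss (the `sam_step` helper, learning rate, and radius `rho` are illustrative assumptions, not SEAL's exact local update rule):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One SAM-style update; grad_fn(w) returns the local loss gradient at w."""
    g = grad_fn(w)
    # Ascend to the worst-case point within an L2 ball of radius rho.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Descend using the gradient evaluated at the perturbed weights.
    return w - lr * grad_fn(w + eps)

# Toy quadratic loss L(w) = 0.5 * ||w||^2, so grad(w) = w.
grad = lambda w: w
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, grad)
# w has been driven into the (flat, low-loss) region near the minimum.
```

Because the descent gradient is taken at the adversarially perturbed point, minima surrounded by steep walls are avoided in favor of flat regions with uniformly low loss.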

๐Ÿ“ Abstract
One of many impediments to applying graph neural networks (GNNs) to large-scale real-world graph data is the challenge of centralized training, which requires aggregating data from different organizations, raising privacy concerns. Federated graph learning (FGL) addresses this by enabling collaborative GNN model training without sharing private data. However, a core challenge in FGL systems is the variation in local training data distributions among clients, known as the data heterogeneity problem. Most existing solutions suffer from two problems: (1) The typical optimizer based on empirical risk minimization tends to cause local models to fall into sharp valleys and weakens their generalization to out-of-distribution graph data. (2) The prevalent dimensional collapse in the learned representations of local graph data has an adverse impact on the classification capacity of the GNN model. To this end, we formulate a novel optimization objective that is aware of the sharpness (i.e., the curvature of the loss surface) of local GNN models. By minimizing the loss function and its sharpness simultaneously, we seek out model parameters in a flat region with uniformly low loss values, thus improving the generalization over heterogeneous data. By introducing a regularizer based on the correlation matrix of local representations, we relax the correlations of representations generated by individual local graph samples, so as to alleviate the dimensional collapse of the learned model. The proposed **S**harpness-aware f**E**derated gr**A**ph **L**earning (SEAL) algorithm can enhance the classification accuracy and generalization ability of local GNN models in federated graph learning. Experimental studies on several graph classification benchmarks show that SEAL consistently outperforms SOTA FGL baselines and provides gains for more participants.
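Assuming the standard min-max formulation of sharpness, the local objective sketched in the abstract can be written as follows (the weighting $\lambda$ and the exact form of the correlation penalty are illustrative assumptions, not the paper's stated equation):

```latex
\min_{w}\;
\underbrace{\max_{\|\epsilon\|_2 \le \rho} \mathcal{L}(w + \epsilon)}_{\text{loss } + \text{ sharpness}}
\;+\;
\lambda \sum_{i \ne j} C_{ij}^2,
\qquad
C = \tfrac{1}{n}\, Z^\top Z,
```

where $\mathcal{L}$ is the local classification loss, $\rho$ bounds the weight perturbation, and $C$ is the $d \times d$ correlation matrix of the $n$ standardized local graph representations $Z$. The inner maximization captures sharpness, since $\max_{\|\epsilon\| \le \rho} \mathcal{L}(w+\epsilon) - \mathcal{L}(w)$ measures how quickly the loss rises around $w$; driving the off-diagonal entries $C_{ij}$ toward zero decorrelates representation dimensions and counteracts dimensional collapse.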
Problem

Research questions and friction points this paper is trying to address.

Addresses data heterogeneity in federated graph learning
Mitigates sharp valleys in loss surfaces for better generalization
Alleviates dimensional collapse in learned graph representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sharpness-aware optimization for flat loss surfaces
Regularizer to reduce representation correlation
Improves generalization in federated graph learning
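The decorrelation regularizer listed above can be sketched as a penalty on the off-diagonal entries of the correlation matrix of local representations. The function name and the squared off-diagonal penalty below are assumptions (a Barlow-Twins-style form), not necessarily SEAL's precise regularizer:

```python
import numpy as np

def correlation_regularizer(H, eps=1e-12):
    """Sum of squared off-diagonal entries of the correlation matrix of
    representations H (n_samples x d). Hypothetical form for illustration."""
    Z = (H - H.mean(axis=0)) / (H.std(axis=0) + eps)  # standardize each dimension
    C = Z.T @ Z / H.shape[0]                          # d x d correlation matrix
    off_diag = C - np.diag(np.diag(C))
    return float((off_diag ** 2).sum())

rng = np.random.default_rng(0)
# Collapsed representations: every dimension carries the same signal (rank 1).
collapsed = rng.normal(size=(256, 1)) @ np.ones((1, 8))
# Healthy representations: dimensions are (nearly) independent.
healthy = rng.normal(size=(256, 8))
r_collapsed = correlation_regularizer(collapsed)
r_healthy = correlation_regularizer(healthy)
```

A fully collapsed batch gives pairwise correlations of 1 across all 8 dimensions (penalty 8×8−8 = 56), while near-independent dimensions give a penalty close to zero, so minimizing this term pushes representations away from the collapsed regime.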
๐Ÿ”Ž Similar Papers
No similar papers found.