FedIA: A Plug-and-Play Importance-Aware Gradient Pruning Aggregation Method for Domain-Robust Federated Graph Learning on Node Classification

📅 2025-09-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated Graph Learning (FGL) suffers from unstable aggregation and slow convergence under domain shifts (e.g., Twitch Gamers, multilingual Wikipedia) due to high gradient noise across clients. To address this, we propose a lightweight "project-then-aggregate" denoising framework: the server performs top-ρ coordinate pruning based on gradient importance and applies an influence-regularized momentum weight to suppress anomalous updates, all without increasing uplink communication or adding measurable server memory overhead. Crucially, projection is integrated as a *pre-aggregation* gradient preprocessing step, yielding a plug-and-play, two-stage denoising pipeline. We theoretically establish that the method preserves the optimal convergence rate of 𝒪(σ²/√T). Extensive experiments on heterogeneous graph benchmarks show consistent gains over nine strong baselines, with smoother convergence and higher node classification accuracy.
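To make the "project-then-aggregate" step concrete, here is a minimal NumPy sketch of server-side top-ρ pruning followed by weighted averaging. This is an illustrative reading of the summary, not the authors' implementation; the function names, the magnitude-based importance proxy, and ρ = 0.05 (the "about 5%" from the abstract) are assumptions.

```python
import numpy as np

def top_rho_mask(grad: np.ndarray, rho: float = 0.05) -> np.ndarray:
    """Zero out all but the top-rho fraction of coordinates by magnitude
    (magnitude used here as an assumed proxy for gradient importance)."""
    k = max(1, int(rho * grad.size))
    # Threshold at the k-th largest absolute value; ties may keep a few extra.
    thresh = np.partition(np.abs(grad).ravel(), -k)[-k]
    return grad * (np.abs(grad) >= thresh)

def project_then_aggregate(client_grads, weights):
    """Denoise each client update first (projection), then aggregate."""
    pruned = [top_rho_mask(g) for g in client_grads]
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return sum(wi * g for wi, g in zip(w, pruned))

# Toy usage: three clients with 1000-dimensional updates.
rng = np.random.default_rng(0)
grads = [rng.normal(size=1000) for _ in range(3)]
agg = project_then_aggregate(grads, weights=[1.0, 1.0, 1.0])
```

Because pruning happens on the server before the weighted average, clients send dense updates as usual, which is consistent with the claim of no extra uplink traffic.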

📝 Abstract
Federated Graph Learning (FGL) under domain skew, as observed on platforms such as Twitch Gamers and multilingual Wikipedia networks, drives client models toward incompatible representations, rendering naive aggregation both unstable and ineffective. We find that the culprit is not the weighting scheme but the noisy gradient signal: empirical analysis of baseline methods suggests that the vast majority of gradient dimensions can be dominated by domain-specific variance. We therefore shift focus from an "aggregation-first" to a "projection-first" strategy that denoises client updates before they are combined. The proposed FedIA framework realises this Importance-Aware idea through a two-stage, plug-and-play pipeline: (i) a server-side top-ρ mask keeps only the roughly 5% most informative coordinates, and (ii) a lightweight influence-regularised momentum weight suppresses outlier clients. FedIA adds no extra uplink traffic and only negligible server memory, making it readily deployable. On both homogeneous (Twitch Gamers) and heterogeneous (Wikipedia) graphs, it yields smoother, more stable convergence and higher final accuracy than nine strong baselines. A convergence sketch further shows that dynamic projection maintains the optimal 𝒪(σ²/√T) rate.
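For context, 𝒪(σ²/√T) is the standard optimal rate for nonconvex stochastic optimisation; a generic bound of this shape (a sketch only, under the usual smoothness and bounded-variance assumptions, not the paper's exact statement or constants) reads:

```latex
% Generic nonconvex SGD-type rate of order sigma^2 / sqrt(T).
% F is the global objective, theta_t the server iterate, F^star the optimum,
% sigma^2 the gradient-noise variance. Constants are omitted.
\min_{0 \le t < T} \mathbb{E}\!\left[ \lVert \nabla F(\theta_t) \rVert^2 \right]
  \;\le\; \mathcal{O}\!\left( \frac{F(\theta_0) - F^{\star} + \sigma^2}{\sqrt{T}} \right)
```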
Problem

Research questions and friction points this paper is trying to address.

Addresses unstable aggregation in federated graph learning under domain skew
Mitigates noisy gradient signals in client updates that are dominated by domain-specific variance
Improves convergence stability and accuracy on node classification tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Plug-and-play importance-aware gradient pruning aggregation
Server-side top-ρ mask keeps only the most informative coordinates
Lightweight influence-regularised momentum weight suppresses outlier clients (see the sketch below)
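A hedged sketch of what an influence-regularised momentum weighting could look like: clients are scored by agreement with a server-side momentum buffer and the scores are softmax-normalised, so anomalous updates receive small weights. The cosine-similarity score, the softmax, and the β and τ values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def influence_weights(client_grads, momentum, tau: float = 1.0) -> np.ndarray:
    """Softmax over cosine similarity to the momentum direction, so updates
    that point away from the running consensus are down-weighted."""
    scores = np.array([
        g.ravel() @ momentum.ravel()
        / (np.linalg.norm(g) * np.linalg.norm(momentum) + 1e-12)
        for g in client_grads
    ]) / tau
    w = np.exp(scores - scores.max())  # numerically stable softmax
    return w / w.sum()

def update_momentum(momentum, aggregate, beta: float = 0.9) -> np.ndarray:
    """Exponential moving average of aggregated updates (beta assumed)."""
    return beta * momentum + (1.0 - beta) * aggregate
```

The weights from influence_weights can be passed directly as the weights argument of the project_then_aggregate sketch above, giving the two-stage pipeline: prune, weight, average, then refresh the momentum buffer.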
Authors

Zhanting Zhou, School of Information and Software Engineering, University of Electronic Science and Technology of China
KaHou Tam, University of Macau (Federated Learning, On Device Learning, Mobile computing)
Zeqin Wu, School of Information and Software Engineering, University of Electronic Science and Technology of China
Pengzhao Sun, School of Information and Software Engineering, University of Electronic Science and Technology of China
Jinbo Wang, Texas A&M University (Ocean dynamics)
Fengli Zhang, School of Information and Software Engineering, University of Electronic Science and Technology of China