Who Owns This Sample: Cross-Client Membership Inference Attack in Federated Graph Neural Networks

📅 2025-07-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work presents the first systematic study of cross-client membership inference attacks (CC-MIA) against federated graph neural networks (FedGNNs), in which a malicious client infers which client a target node sample belongs to, solely from model updates. To address the identity leakage risks arising from the tight coupling of graph topology, gradient dynamics, and aggregation mechanisms in FedGNNs, the authors propose a generic CC-MIA framework integrating three key components: modeling of federated aggregation bias, cross-round gradient pattern analysis, and embedding-space similarity measurement, enhanced by graph-topological features for sharper attribution. Evaluated across multiple real-world graph datasets under heterogeneous federated settings, the attack achieves high client attribution accuracy (up to 92.3%), exposing fine-grained client identity leakage as a novel privacy threat in FedGNNs. The work establishes a first benchmark methodology and empirical foundation for the security assessment of federated graph learning.
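The summary names three attribution cues. As a minimal sketch of the embedding-proximity cue alone, the toy function below attributes a target node to the client whose mean ("prototype") embedding is closest in cosine similarity. The function name, the prototype construction, and the use of plain cosine similarity are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def attribute_by_embedding_proximity(target_emb, client_protos):
    """Assign a target node to the client whose prototype embedding is
    closest in cosine similarity.

    target_emb:    (d,) embedding of the target node under the global model.
    client_protos: dict client_id -> (d,) mean embedding of that client's
                   (estimated) local nodes. Both inputs are hypothetical;
                   the paper's exact features are not reproduced here.
    """
    scores = {
        cid: F.cosine_similarity(target_emb, proto, dim=0).item()
        for cid, proto in client_protos.items()
    }
    # Higher similarity -> more likely owner of the target sample.
    return max(scores, key=scores.get), scores

# Toy usage: three clients with random prototypes; the target node sits
# near client_1's prototype, so it should be attributed to client_1.
d = 64
protos = {c: torch.randn(d) for c in ["client_0", "client_1", "client_2"]}
node = protos["client_1"] + 0.1 * torch.randn(d)
owner, scores = attribute_by_embedding_proximity(node, protos)
print(owner)  # most likely "client_1"
```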

📝 Abstract
Graph-structured data is prevalent in many real-world applications, including social networks, financial systems, and molecular biology. Graph Neural Networks (GNNs) have become the de facto standard for learning from such data due to their strong representation capabilities. As GNNs are increasingly deployed in federated learning (FL) settings to preserve data locality and privacy, new privacy threats arise from the interaction between graph structures and decentralized training. In this paper, we present the first systematic study of cross-client membership inference attacks (CC-MIA) against node classification in federated GNNs (FedGNNs), where a malicious client aims to infer which client owns a given data sample. Unlike prior work on centralized settings, which asks only whether a sample was included in training, our attack targets sample-to-client attribution, a finer-grained privacy risk unique to federated settings. We design a general attack framework that exploits FedGNNs' aggregation behaviors, gradient updates, and embedding proximity to link samples to their source clients across training rounds. We evaluate our attack across multiple graph datasets under realistic FL setups. Results show that our method achieves high performance on both membership inference and ownership identification. Our findings highlight a new privacy threat in federated graph learning: client identity leakage through structural and model-level cues, motivating the need for attribution-robust GNN design.
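The abstract's "gradient updates ... across training rounds" cue can be pictured as follows. This is a rough sketch under assumed conditions: the attacker observes per-client updates (as in some cross-silo FL protocols) and can compute a probe gradient for the target sample on the global model. The two-dimensional feature choice is hypothetical, not the paper's.

```python
import torch
import torch.nn.functional as F

def gradient_pattern_features(client_updates, probe_grad):
    """Summarize each client's updates across rounds relative to a probe
    gradient computed on the target sample.

    client_updates: dict client_id -> list of flattened update tensors,
                    one per observed round (hypothetical observation model).
    probe_grad:     flattened gradient of the loss on the target sample,
                    computed by the attacker on the global model.
    Returns dict client_id -> 2-element feature tensor:
    [mean cosine alignment with probe_grad, mean update norm].
    """
    feats = {}
    for cid, updates in client_updates.items():
        cos = torch.stack([F.cosine_similarity(u, probe_grad, dim=0)
                           for u in updates])
        norms = torch.stack([u.norm() for u in updates])
        # A client whose updates repeatedly align with the probe gradient
        # is a more plausible owner of the target sample.
        feats[cid] = torch.stack([cos.mean(), norms.mean()])
    return feats
```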
Problem

Research questions and friction points this paper is trying to address.

Study cross-client membership inference attacks in federated GNNs
Identify sample-to-client attribution privacy risks in FedGNNs
Exploit aggregation behaviors and gradients for ownership identification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exploits FedGNNs' aggregation behaviors
Utilizes gradient updates and embedding proximity
Links samples to source clients across training rounds (a toy fusion of these cues is sketched below)
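Taken together, the three cues above must be combined into a single per-client ownership score. Below is a minimal, purely illustrative fusion; the linear form and the weights are assumptions, and the paper's actual decision rule is not reproduced here.

```python
def fuse_attribution_scores(grad_scores, emb_scores, agg_scores,
                            weights=(0.4, 0.4, 0.2)):
    """Linearly combine per-client scores from the gradient, embedding,
    and aggregation-bias cues; a higher fused score means a more likely
    owner. Each *_scores argument maps client_id -> scalar score.
    The weights are illustrative, not taken from the paper.
    """
    w_g, w_e, w_a = weights
    fused = {cid: w_g * grad_scores[cid]
                  + w_e * emb_scores[cid]
                  + w_a * agg_scores[cid]
             for cid in grad_scores}
    return max(fused, key=fused.get), fused

# Toy usage: client_1 dominates two of the three cues and wins overall.
g = {"client_0": 0.1, "client_1": 0.8, "client_2": 0.3}
e = {"client_0": 0.2, "client_1": 0.7, "client_2": 0.4}
a = {"client_0": 0.5, "client_1": 0.4, "client_2": 0.3}
print(fuse_attribution_scores(g, e, a)[0])  # -> "client_1"
```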
Kunhao Li
South China University of Technology, Guangzhou
Di Wu
University of Southern Queensland, Queensland
Jun Bai
Assistant professor
Computer aided drug discovery, Medical image analysis, AI therapeutic target identification
Jing Xu
CISPA Helmholtz Center for Information Security, Saarbrücken
Lei Yang
South China University of Technology, Guangzhou
Ziyi Zhang
South China University of Technology, Guangzhou
Yiliao Song
The University of Adelaide
Trustworthy Machine Learning, Hypothesis Testing, Concept Drift
Wencheng Yang
University of Southern Queensland
Biometrics, Privacy-Preserving AI
Taotao Cai
University of Southern Queensland
Yan Li
University of Southern Queensland, Queensland