Safeguarding Graph Neural Networks against Topology Inference Attacks

📅 2025-09-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses an overlooked graph-level topological privacy risk in Graph Neural Networks (GNNs): while prior efforts focus on edge-level differential privacy, black-box attacks can effectively infer global graph structure. To mitigate this, we propose Private Graph Reconstruction (PGR), a meta-optimization framework that iteratively generates synthetic training graphs via bilevel optimization, preserving model accuracy while implicitly protecting topological privacy. Unlike explicit noise injection, PGR leverages meta-gradients to drive graph reconstruction, inherently resisting query-based Topology Inference Attacks (TIAs). Experiments across multiple benchmarks demonstrate that PGR reduces topological information leakage by 62%–89% with only a marginal 0.3%–1.2% drop in model accuracy, significantly outperforming edge-level differential privacy baselines.

📝 Abstract
Graph Neural Networks (GNNs) have emerged as powerful models for learning from graph-structured data. However, their widespread adoption has raised serious privacy concerns. While prior research has primarily focused on edge-level privacy, a critical yet underexplored threat lies in topology privacy: the confidentiality of the graph's overall structure. In this work, we present a comprehensive study on topology privacy risks in GNNs, revealing their vulnerability to graph-level inference attacks. To this end, we propose a suite of Topology Inference Attacks (TIAs) that can reconstruct the structure of a target training graph using only black-box access to a GNN model. Our findings show that GNNs are highly susceptible to these attacks, and that existing edge-level differential privacy mechanisms are insufficient as they either fail to mitigate the risk or severely compromise model accuracy. To address this challenge, we introduce Private Graph Reconstruction (PGR), a novel defense framework designed to protect topology privacy while maintaining model accuracy. PGR is formulated as a bi-level optimization problem, where a synthetic training graph is iteratively generated using meta-gradients, and the GNN model is concurrently updated based on the evolving graph. Extensive experiments demonstrate that PGR significantly reduces topology leakage with minimal impact on model accuracy. Our code is anonymously available at https://github.com/JeffffffFu/PGR.
Problem

Research questions and friction points this paper is trying to address.

Protecting GNNs from graph structure reconstruction attacks
Addressing topology privacy risks in graph neural networks
Mitigating graph-level inference attacks while preserving accuracy
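To make the threat concrete, here is a minimal sketch of how a black-box topology inference attack can work in principle: the attacker perturbs one node's features and watches which other nodes' predictions move. For a one-hop GCN, cross-node influence exists exactly where an edge exists. This is an illustrative toy (a linear one-layer GCN, attacker-chosen inputs), not the specific TIAs proposed in the paper:

```python
import numpy as np

# Toy target: a one-layer linear GCN f(X) = A_norm @ X @ W with hidden graph A.
rng = np.random.default_rng(0)
n, d = 5, 4
A = np.zeros((n, n))
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A_hat = A + np.eye(n)                          # self-loops
deg = A_hat.sum(1)
A_norm = A_hat / np.sqrt(np.outer(deg, deg))   # symmetric normalization
W = rng.normal(size=(d, 3))                    # stand-in for trained weights
X = rng.normal(size=(n, d))

def query(X):
    """Black-box access: the attacker sees only the model's outputs."""
    return A_norm @ X @ W

# Influence probing: nudge node i's features, measure each node's output shift.
eps = 1e-3
base = query(X)
influence = np.zeros((n, n))
for i in range(n):
    Xp = X.copy()
    Xp[i] += eps
    influence[:, i] = np.linalg.norm(query(Xp) - base, axis=1)

# For this one-hop model, nonzero cross-influence flags exactly the edges.
inferred = (influence > 1e-6).astype(int)
np.fill_diagonal(inferred, 0)                  # discard self-loop influence
```

Real attacks must cope with nonlinearity, multi-hop propagation, and restricted query interfaces, but the core signal, output coupling induced by message passing, is the same leakage channel.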
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bi-level optimization for synthetic graph generation
Meta-gradients to iteratively generate a synthetic training graph
Defense framework that preserves GNN accuracy while resisting topology inference attacks
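The bilevel structure above can be sketched end to end: the inner level trains the model on a candidate synthetic graph, and the outer level updates the graph so the trained model still performs well. This toy uses a linear GCN with finite-difference outer gradients so it stays dependency-free; the paper's PGR computes true meta-gradients via automatic differentiation, and its outer objective is its own (this sketch optimizes task loss only). All sizes, learning rates, and helper names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, c = 6, 4, 2
X = rng.normal(size=(n, d))                    # node features
y = np.array([0, 0, 0, 1, 1, 1])
Y = np.eye(c)[y]                               # one-hot labels

def normalize(S):
    S_hat = S + np.eye(n)                      # self-loops
    deg = S_hat.sum(1)
    return S_hat / np.sqrt(np.outer(deg, deg))

def task_loss(S, W):
    return ((normalize(S) @ X @ W - Y) ** 2).mean()

def grad_W(S, W):
    AX = normalize(S) @ X
    return 2 * AX.T @ (AX @ W - Y) / (n * c)

def meta_loss(S, W0, inner_lr=0.1, inner_steps=5):
    # Inner level: train the model on the candidate synthetic graph S.
    W = W0.copy()
    for _ in range(inner_steps):
        W = W - inner_lr * grad_W(S, W)
    # Outer objective: task loss of the model trained on S.
    return task_loss(S, W)

# Outer level: update a continuous adjacency S with a finite-difference
# stand-in for the meta-gradient (autodiff in a real implementation).
S = rng.uniform(0, 1, size=(n, n))
S = (S + S.T) / 2
np.fill_diagonal(S, 0)
W0 = 0.1 * rng.normal(size=(d, c))
eps, lr = 1e-4, 1.0
init = meta_loss(S, W0)
for _ in range(15):
    base = meta_loss(S, W0)
    G = np.zeros_like(S)
    for i in range(n):
        for j in range(i + 1, n):
            Sp = S.copy()
            Sp[i, j] = Sp[j, i] = S[i, j] + eps
            G[i, j] = G[j, i] = (meta_loss(Sp, W0) - base) / eps
    S_new = np.clip(S - lr * G, 0.0, 1.0)
    if meta_loss(S_new, W0) < base:            # accept only improving steps
        S = S_new
    else:
        lr *= 0.5                              # back off on overshoot
```

The key design point this illustrates is that gradients flow through the inner training loop into the graph itself, so the synthetic topology is shaped by task performance rather than by copying the private edges.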