Graph in the Vault: Protecting Edge GNN Inference with Trusted Execution Environment

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address model intellectual property leakage and graph data privacy risks in deploying Graph Neural Networks (GNNs) on edge devices, this paper proposes the first Trusted Execution Environment (TEE)-based inference framework for end-to-end GNN protection. The method introduces a "partition-before-training" architecture coupled with a lightweight private GNN rectifier, enabling strict end-to-end isolation of model parameters and raw graph data within Intel SGX. This design is the first to fully support the entire GNN inference pipeline, including neighborhood aggregation and feature transformation, inside a TEE, balancing strong security guarantees with practical efficiency. It effectively mitigates state-of-the-art link stealing attacks while incurring less than 2% accuracy degradation. The framework is validated end-to-end on a real SGX platform, demonstrating feasibility and robustness for privacy-preserving GNN deployment at the edge.
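The partition-before-training idea described above can be sketched as follows: a public backbone runs outside the enclave on node features alone, while a lightweight private rectifier inside the TEE aggregates over the private adjacency and corrects the backbone's output. This is a minimal numpy sketch only; the function names, layer sizes, and residual-correction form are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def public_backbone(X, W_pub):
    """Runs OUTSIDE the TEE: feature-only transform that never
    touches the private graph structure."""
    return relu(X @ W_pub)

def private_rectifier(H_pub, A, W_priv):
    """Would run INSIDE the SGX enclave: one-hop mean aggregation
    over the private adjacency A, applied as a residual correction.
    A and W_priv never leave the enclave."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1.0)
    H_agg = (A @ H_pub) / deg          # mean neighborhood aggregation
    return H_pub + H_agg @ W_priv      # correct the public output

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))               # node features (public)
A = (rng.random((5, 5)) < 0.4).astype(float)  # private adjacency
W_pub = rng.standard_normal((8, 4))           # public backbone weights
W_priv = rng.standard_normal((4, 4))          # protected rectifier weights

H = private_rectifier(public_backbone(X, W_pub), A, W_priv)
print(H.shape)
```

Because only `H_pub` crosses the enclave boundary, an attacker observing the untrusted side sees no edge information, which is the property that blunts link stealing attacks.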

📝 Abstract
Wide deployment of machine learning models on edge devices has rendered model intellectual property (IP) and data privacy vulnerable. We propose GNNVault, the first secure Graph Neural Network (GNN) deployment strategy based on a Trusted Execution Environment (TEE). GNNVault follows a 'partition-before-training' design and includes a private GNN rectifier that complements a public backbone model. This way, both critical GNN model parameters and the private graph used during inference are protected within secure TEE compartments. Real-world implementations with Intel SGX demonstrate that GNNVault safeguards GNN inference against state-of-the-art link stealing attacks with negligible accuracy degradation (<2%).
Problem

Research questions and friction points this paper is trying to address.

Secure GNN deployment on edge devices
Protect model IP and data privacy
Inference security against link stealing attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Trusted Execution Environment
Implements private GNN rectifier
Protects against link stealing attacks
Ruyi Ding
Northeastern University
Tianhong Xu
Northeastern University
A. A. Ding
Northeastern University
Yunsi Fei
Professor of Electrical and Computer Engineering, Northeastern University
Interests: hardware security, EDA, computer architecture, embedded systems, machine learning systems