🤖 AI Summary
To address model intellectual property leakage and graph data privacy risks when deploying Graph Neural Networks (GNNs) on edge devices, this paper proposes the first Trusted Execution Environment (TEE)-based inference framework for end-to-end GNN protection. The method introduces a "partition-before-training" architecture coupled with a lightweight private GNN rectifier, enabling strict end-to-end isolation of critical model parameters and the private graph within Intel SGX. This design is the first to support the entire GNN inference pipeline—including neighborhood aggregation and feature transformation—inside a TEE, balancing strong security guarantees with practical efficiency. It effectively mitigates state-of-the-art link stealing attacks while incurring less than 2% accuracy degradation. The framework is validated end-to-end on a real SGX platform, demonstrating feasibility and robustness for privacy-preserving GNN deployment at the edge.
📝 Abstract
Wide deployment of machine learning models on edge devices has rendered model intellectual property (IP) and data privacy vulnerable. We propose GNNVault, the first secure Graph Neural Network (GNN) deployment strategy based on a Trusted Execution Environment (TEE). GNNVault follows a 'partition-before-training' design and includes a private GNN rectifier that complements a public backbone model. This way, both critical GNN model parameters and the private graph used during inference are protected within secure TEE compartments. Real-world implementations with Intel SGX demonstrate that GNNVault safeguards GNN inference against state-of-the-art link stealing attacks with negligible accuracy degradation (<2%).
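The split described above—a public backbone that never sees private edges, plus a small private rectifier that aggregates over the protected graph inside the enclave—can be sketched in plain NumPy. This is an illustrative toy, not the paper's implementation: the function names, the one-layer GCN-style backbone, the residual rectifier, and all shapes are assumptions, and the TEE boundary is indicated only by comments.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def public_backbone(X, A_public, W):
    # Untrusted side: one GCN-style layer over a sanitized public
    # graph (here just the identity, i.e., no private edges leak).
    return relu(A_public @ X @ W)

def private_rectifier(H, A_private, W_r):
    # Trusted side (conceptually inside the TEE): aggregate over the
    # private adjacency and correct the public representation with
    # private parameters. Residual form is an illustrative choice.
    return H + A_private @ H @ W_r

n, d, h = 4, 3, 2                                  # toy sizes
X = rng.normal(size=(n, d))                        # node features
A_public = np.eye(n)                               # backbone sees no edges
A_private = (rng.random((n, n)) < 0.5).astype(float)  # protected graph
W = rng.normal(size=(d, h))                        # public weights
W_r = rng.normal(size=(h, h))                      # private weights

H = public_backbone(X, A_public, W)                # runs outside the enclave
logits = private_rectifier(H, A_private, W_r)      # runs inside the enclave
print(logits.shape)  # (4, 2)
```

The point of the sketch is the data-flow boundary: only the intermediate representation `H` crosses into the trusted side, while the private adjacency and the rectifier's parameters never leave it.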