🤖 AI Summary
To address data privacy risks and high computational costs in cloud-based Graph Neural Network (GNN) training and inference, this paper proposes the first privacy-preserving computing framework integrating a four-party asynchronous secure array access protocol with a node-neighborhood random padding mechanism. The framework ensures end-to-end security while significantly reducing computational and communication overhead. Experiments demonstrate that, compared to state-of-the-art methods, it achieves average reductions of 75.28% in training time and 82.80% in inference time, along with 52.61% and 50.26% decreases in communication volume, respectively; estimated cost savings on Google Cloud reach 55.05% and 59.00%. The core contribution lies in the first joint design of asynchronous multi-party secure computation and lightweight privacy protection tailored for structured graph data—effectively balancing security guarantees, computational efficiency, and practical deployability.
📝 Abstract
Graph Neural Networks (GNNs) have marked significant impact in traffic state prediction, social recommendation, knowledge-aware question answering and so on. As more and more users move towards cloud computing, it has become a critical issue to unleash the power of GNNs while protecting the privacy in cloud environments. Specifically, the training data and inference data for GNNs need to be protected from being stolen by external adversaries. Meanwhile, the financial cost of cloud computing is another primary concern for users. Therefore, although existing studies have proposed privacy-preserving techniques for GNNs in cloud environments, their additional computational and communication overhead remain relatively high, causing high financial costs that limit their widespread adoption among users.
To protect GNN privacy while lowering the additional financial costs, we introduce Panther, a cost-effective privacy-preserving framework for GNN training and inference services in cloud environments. Technically, Panther leverages four-party computation to asynchronously executing the secure array access protocol, and randomly pads the neighbor information of GNN nodes. We prove that Panther can protect privacy for both training and inference of GNN models. Our evaluation shows that Panther reduces the training and inference time by an average of 75.28% and 82.80%, respectively, and communication overhead by an average of 52.61% and 50.26% compared with the state-of-the-art, which is estimated to save an average of 55.05% and 59.00% in financial costs (based on on-demand pricing model) for the GNN training and inference process on Google Cloud Platform.