🤖 AI Summary
Problem: Ensuring secure inference of third-party graph neural networks (GNNs) in cloud environments where clients, cloud servers, and model owners are mutually untrusted, and where sensitive data (input features, graph topology, model parameters, and intermediate representations) must remain confidential.
Method: We propose a distributed secure multi-party computation (SMPC) framework that requires no trusted third party. It leverages encrypted message passing and privacy-preserving feature transformation to jointly protect all sensitive assets across the three parties.
Contribution/Results: The framework supports an arbitrary number of participants and is provably secure against collusion attacks involving up to (P-1) of the P parties. Theoretical analysis confirms strong security guarantees under standard cryptographic assumptions, while empirical evaluation demonstrates efficient forward inference, favorable scalability, and practical deployability, achieving robust privacy without compromising computational feasibility.
📝 Abstract
We present CryptGNN, a secure and effective inference solution for third-party graph neural network (GNN) models hosted in the cloud and accessed by clients as machine learning as a service (MLaaS). The main novelty of CryptGNN is its secure message-passing and feature-transformation layers, built with distributed secure multi-party computation (SMPC) techniques. CryptGNN protects the client's input data and graph structure from the cloud provider and the third-party model owner, and it protects the model parameters from the cloud provider and the clients. CryptGNN works with any number of SMPC parties, requires no trusted server, and remains provably secure even if P-1 out of P parties in the cloud collude. Theoretical analysis and empirical experiments demonstrate the security and efficiency of CryptGNN.
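The (P-1)-collusion guarantee described above is characteristic of additive secret sharing, a standard building block of distributed SMPC protocols. The sketch below is illustrative only, not the paper's actual protocol: the modulus, the sharing scheme, and the toy "message passing" over shares are all assumptions. It shows why any P-1 shares reveal nothing about a secret, and why linear operations such as summing neighbor features can be computed by each party locally on its own shares.

```python
import secrets

Q = 2**61 - 1  # illustrative modulus (assumption; not specified by CryptGNN)

def share(x: int, p: int) -> list[int]:
    """Split secret x into p additive shares modulo Q.
    Any p-1 shares are uniformly random, so up to p-1
    colluding parties learn nothing about x."""
    shares = [secrets.randbelow(Q) for _ in range(p - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Only all p shares together recover the secret."""
    return sum(shares) % Q

# Toy "message passing": the aggregated neighbor feature a + b
# is computed share-wise, so no party ever sees a or b in the clear.
p = 3
a, b = 7, 35                       # hypothetical neighbor features
sa, sb = share(a, p), share(b, p)
agg_shares = [(sa[i] + sb[i]) % Q for i in range(p)]  # local per-party work
assert reconstruct(agg_shares) == a + b
```

The same linearity is what makes secret-shared aggregation attractive for GNN message passing; nonlinear activations require additional SMPC machinery beyond this sketch.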