CITED: A Decision Boundary-Aware Signature for GNNs Towards Model Extraction Defense

📅 2026-02-23
🤖 AI Summary
This work addresses the vulnerability of Graph Neural Networks (GNNs) in Machine Learning as a Service (MLaaS) settings to model extraction attacks, wherein adversaries construct functionally similar surrogate models through query access. To counter this threat, the paper introduces CITED, a novel ownership verification framework that, for the first time, embeds decision boundary-aware signatures simultaneously at both the embedding and label layers. Notably, CITED operates without requiring auxiliary models and incurs no degradation in downstream task performance. Empirical evaluations demonstrate that CITED substantially outperforms existing watermarking and fingerprinting approaches, achieving superior defense efficacy, robustness against adaptive attacks, and model efficiency.

📝 Abstract
Graph neural networks (GNNs) have demonstrated superior performance in various applications, such as recommendation systems and financial risk management. However, deploying large-scale GNN models locally is particularly challenging for users, as it requires significant computational resources and extensive proprietary data. Consequently, Machine Learning as a Service (MLaaS) has become increasingly popular, offering a convenient way to deploy and access various models, including GNNs. However, an emerging threat known as Model Extraction Attacks (MEAs) presents significant risks, as adversaries can readily obtain surrogate GNN models exhibiting similar functionality. Specifically, attackers repeatedly query the target model using subgraph inputs to collect the corresponding responses. These input-output pairs are subsequently used to train surrogate models at minimal cost. Many techniques have been proposed to defend against MEAs, but most are limited to specific output levels (e.g., embedding or label) and suffer from inherent technical drawbacks. To address these limitations, we propose CITED, a novel ownership verification framework and, to our knowledge, the first method to achieve ownership verification at both the embedding and label levels. Moreover, CITED is a signature-based method that neither harms downstream performance nor introduces efficiency-reducing auxiliary models, while still outperforming all watermarking and fingerprinting approaches. Extensive experiments demonstrate the effectiveness and robustness of our CITED framework. Code is available at: https://github.com/LabRAI/CITED.
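The query-then-train extraction loop described in the abstract can be illustrated with a deliberately simplified sketch. This is not the paper's method or a GNN attack; the MLaaS endpoint is stood in for by a hypothetical black-box `target_model` over 2-D feature vectors, and the surrogate is a plain perceptron, whereas real MEAs query with subgraphs and train surrogate GNNs.

```python
# Toy sketch of a query-based model extraction attack (the threat CITED
# defends against): query a black box, collect input-output pairs, fit a
# surrogate. All names here are illustrative stand-ins, not the paper's code.
import random

def target_model(x):
    """Black-box victim: the attacker sees only the returned label."""
    # Secret decision rule, unknown to the attacker.
    return 1 if 2.0 * x[0] - 1.0 * x[1] > 0.5 else 0

def extract_surrogate(query_budget=500, seed=0):
    rng = random.Random(seed)
    # Step 1: craft queries and record the service's responses.
    queries = [(rng.uniform(-1, 1), rng.uniform(-1, 1))
               for _ in range(query_budget)]
    labels = [target_model(q) for q in queries]
    # Step 2: train a surrogate (here a perceptron) on the stolen pairs.
    w, b = [0.0, 0.0], 0.0
    for _ in range(50):  # epochs
        for x, y in zip(queries, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # perceptron update on misclassified points
            w[0] += 0.1 * err * x[0]
            w[1] += 0.1 * err * x[1]
            b += 0.1 * err
    return lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

surrogate = extract_surrogate()
# Fidelity: how often the surrogate agrees with the victim on fresh inputs.
rng = random.Random(1)
test = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(1000)]
fidelity = sum(surrogate(x) == target_model(x) for x in test) / len(test)
print(f"surrogate fidelity: {fidelity:.2f}")
```

The point of the sketch is that the attacker never sees the victim's parameters, only its query responses, yet recovers a functionally similar model cheaply; defenses like CITED aim to let the owner later verify that a suspect model was extracted this way.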
Problem

Research questions and friction points this paper is trying to address.

Graph Neural Networks
Model Extraction Attacks
Machine Learning as a Service
Ownership Verification
Decision Boundary
Innovation

Methods, ideas, or system contributions that make the work stand out.

model extraction defense
graph neural networks
ownership verification
decision boundary-aware signature
MLaaS security
Bolin Shen
Florida State University
Graph Learning · Data Mining
Md Shamim Seraj
Department of Computer Science, Florida State University, Tallahassee, Florida, United States
Zhan Cheng
Department of Mathematics, University of Wisconsin, Madison, Wisconsin, United States
Shayok Chakraborty
Researcher
Machine Learning · Computer Vision
Yushun Dong
Assistant Professor, Department of Computer Science, Florida State University
AI Security · AI Integrity · Graph Machine Learning · LLMs