Effective backdoor attack on graph neural networks in link prediction tasks

πŸ“… 2024-01-05
πŸ“ˆ Citations: 3
✨ Influential: 1
πŸ“„ PDF
πŸ€– AI Summary
This work identifies a previously underexplored backdoor vulnerability in Graph Neural Networks (GNNs) for link prediction. To address this, we propose the first single-node trigger-based backdoor attack specifically designed for link prediction: during training, lightweight, topology-agnostic triggers are injected into selected node pairs to induce targeted misprediction on non-adjacent node pairs. Crucially, the attack preserves the original graph structure, ensuring high stealthiness and low poisoning cost. It is fully integrated into an end-to-end GNN training framework with a customized data poisoning strategy. Extensive evaluations across multiple benchmark datasets demonstrate attack success rates exceeding 90%, while degrading the model’s original link prediction performance by less than 2%. These results rigorously validate the practical security risks posed by backdoor attacks on GNN-based link prediction systems.

πŸ“ Abstract
Graph Neural Networks (GNNs) are a class of deep learning models capable of processing graph-structured data, and they have demonstrated significant performance in a variety of real-world applications. Recent studies have found that GNN models are vulnerable to backdoor attacks: when specific patterns (called backdoor triggers, e.g., subgraphs or nodes) appear in the input data, the backdoor embedded in the GNN model is activated and the model misclassifies the input into the target class label specified by the attacker; when no trigger is present, the backdoor remains dormant and the model works normally. Backdoor attacks are highly stealthy and expose GNN models to serious security risks. Current research on backdoor attacks against GNNs focuses mainly on tasks such as graph classification and node classification; backdoor attacks against link prediction tasks are rarely studied. In this paper, we propose a backdoor attack against GNN-based link prediction tasks and reveal the existence of this security vulnerability, which makes backdoored GNN models incorrectly predict two unlinked nodes as linked when a trigger appears. The method uses a single node as the trigger and poisons selected node pairs in the training graph, so that the backdoor is embedded in the GNN model during training. In the inference stage, the backdoor can be activated by simply linking the trigger node to the two end nodes of an unlinked node pair in the input data, causing the GNN model to produce an incorrect link prediction result for the target node pair.
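The poisoning and activation steps described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the edge-set graph representation, and the label encoding are all illustrative assumptions; a real attack would feed the poisoned graph into GNN training.

```python
def poison_graph(edges, trigger_node, target_pairs):
    """Training-time poisoning: inject a single-node backdoor trigger.

    For each selected unlinked node pair (u, v), connect the trigger
    node to both endpoints and relabel the pair as linked. The original
    edge set is left untouched, which is what keeps the attack stealthy.
    (Illustrative sketch, not the paper's actual code.)
    """
    poisoned_edges = set(edges)
    poisoned_labels = {}  # (u, v) -> 1 means "predict a link"
    for u, v in target_pairs:
        # the pair must be unlinked in the clean graph
        assert (u, v) not in poisoned_edges and (v, u) not in poisoned_edges
        poisoned_edges.add((trigger_node, u))
        poisoned_edges.add((trigger_node, v))
        poisoned_labels[(u, v)] = 1  # attacker-specified target label
    return poisoned_edges, poisoned_labels


def attach_trigger(edges, trigger_node, u, v):
    """Inference-time activation: link the trigger node to both end
    nodes of the target unlinked pair, so the backdoored model
    predicts a link between u and v."""
    return set(edges) | {(trigger_node, u), (trigger_node, v)}
```

For example, poisoning the pair `(0, 2)` in a small graph adds only the two trigger edges `(t, 0)` and `(t, 2)`, consistent with the paper's claim of low poisoning cost and preserved graph structure.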
Problem

Research questions and friction points this paper is trying to address.

Backdoor attacks on GNNs in link prediction tasks
Vulnerability of GNNs to stealthy trigger-based attacks
Single-node triggers causing incorrect link predictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses single node as backdoor trigger
Poisons node pairs in training graph
Activates backdoor via trigger node linking
Jiazhu Dai
Haoyu Sun