Graph-Aware Text-Only Backdoor Poisoning for Text-Attributed Graphs

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the security risk of backdoor attacks on text-attributed graphs, where adversaries can compromise graph neural networks by manipulating node text alone. The authors propose TAGBD, a novel attack method that operates without altering the graph structure. TAGBD employs a graph-aware strategy to identify vulnerable nodes and leverages a shadow graph model to generate semantically natural trigger texts, which are injected by replacing the original text or appending a short phrase. Notably, this constitutes the first demonstration that node text alone serves as an effective vector for backdoor injection in graph neural networks. Extensive experiments on three benchmark datasets show that TAGBD achieves high attack success rates, transfers well across different models, and remains robust against mainstream defense mechanisms.

📝 Abstract
Many learning systems now use graph data in which each node also contains text, such as papers with abstracts or users with posts. Because these texts often come from open platforms, an attacker may be able to quietly poison a small part of the training data and later make the model produce wrong predictions on demand. This paper studies that risk in a realistic setting where the attacker edits only node text and does not change the graph structure. We propose TAGBD, a text-only backdoor attack for text-attributed graphs. TAGBD first finds training nodes that are easier to influence, then generates natural-looking trigger text with the help of a shadow graph model, and finally injects the trigger by either replacing the original text or appending a short phrase. Experiments on three benchmark datasets show that the attack is highly effective, transfers across different graph models, and remains strong under common defenses. These results demonstrate that text alone is a practical attack channel in graph learning systems and suggest that future defenses should inspect both graph links and node content.
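The abstract's injection step can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the function name `poison_node_texts`, the dict-based node representation, and the `mode` parameter are all assumptions. It shows only the final text-only injection (replace or append a trigger phrase and relabel to the target class); node selection and trigger generation via the shadow graph model are out of scope here.

```python
def poison_node_texts(texts, labels, poison_ids, trigger,
                      target_label, mode="append"):
    """Return poisoned copies of node texts and labels.

    Hypothetical sketch of a text-only backdoor injection step
    (names and signature are assumptions, not the TAGBD code).
    texts:  dict node_id -> str   (node attribute text)
    labels: dict node_id -> int   (training labels)
    mode:   "append" adds the trigger phrase; "replace" substitutes it.
    """
    poisoned_texts = dict(texts)
    poisoned_labels = dict(labels)
    for nid in poison_ids:
        if mode == "replace":
            # Replace the whole node text with the generated trigger text.
            poisoned_texts[nid] = trigger
        else:
            # Append a short, natural-looking trigger phrase.
            poisoned_texts[nid] = texts[nid] + " " + trigger
        # Relabel the poisoned node to the attacker's target class.
        poisoned_labels[nid] = target_label
    return poisoned_texts, poisoned_labels
```

Note that the graph structure (edges) is never touched, which is the point of the threat model: only node content and the corresponding training labels change.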
Problem

Research questions and friction points this paper is trying to address.

text-attributed graphs
backdoor poisoning
graph learning
adversarial attack
node text
Innovation

Methods, ideas, or system contributions that make the work stand out.

backdoor attack
text-attributed graphs
graph neural networks
trigger generation
data poisoning
Qi Luo
School of Computer Science and Technology, Shandong University, Qingdao, China
Minghui Xu
School of Computer Science and Technology, Shandong University, Qingdao, China
Dongxiao Yu
Professor of Computer Science, Shandong University
Distributed Computing · Wireless Networking · Graph Algorithms
Xiuzhen Cheng
School of Computer Science and Technology, Shandong University
Blockchain · IoT Security · Edge Computing · Distributed Computing