Towards Effective, Stealthy, and Persistent Backdoor Attacks Targeting Graph Foundation Models

📅 2025-11-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph Foundation Models (GFMs) are vulnerable to backdoor attacks during pretraining, yet existing methods face bottlenecks in effectiveness (due to unknown downstream tasks), stealthiness (owing to cross-domain feature heterogeneity), and persistence (as fine-tuning easily erases backdoors). This paper proposes the first robust backdoor attack framework tailored for GFM pretraining. It introduces a label-free trigger association module to enable task-agnostic backdoor injection; designs a node-adaptive trigger generator that integrates prototype embeddings and autoencoders to enhance cross-domain stealth; and incorporates a persistence anchoring mechanism that identifies parameter-sensitivity-invariant subspaces to harden backdoors against fine-tuning. Extensive experiments demonstrate that our method achieves significantly higher attack success rates across diverse GFMs and downstream tasks, while maintaining low detectability and strong resilience to fine-tuning.

📝 Abstract
Graph Foundation Models (GFMs) are pre-trained on diverse source domains and adapted to unseen targets, enabling broad generalization for graph machine learning. Although GFMs have attracted considerable attention recently, their vulnerability to backdoor attacks remains largely underexplored. A compromised GFM can introduce backdoor behaviors into downstream applications, posing serious security risks. However, launching backdoor attacks against GFMs is non-trivial due to three key challenges. (1) Effectiveness: attackers lack knowledge of the downstream task during pre-training, making it difficult to ensure that triggers reliably induce misclassification into the desired classes. (2) Stealthiness: node features vary widely across domains, making it difficult to insert triggers that remain inconspicuous. (3) Persistence: downstream fine-tuning may erase backdoor behaviors by updating model parameters. To address these challenges, we propose GFM-BA, a novel Backdoor Attack model against Graph Foundation Models. Specifically, we first design a label-free trigger association module that links the trigger to a set of prototype embeddings, eliminating the need for knowledge of downstream tasks when injecting the backdoor. Then, we introduce a node-adaptive trigger generator that dynamically produces node-specific triggers, reducing the risk of trigger detection while reliably activating the backdoor. Lastly, we develop a persistent backdoor anchoring module that anchors the backdoor to fine-tuning-insensitive parameters, enhancing the persistence of the backdoor under downstream adaptation. Extensive experiments demonstrate the effectiveness, stealthiness, and persistence of GFM-BA.
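To make the two central ideas concrete, here is a minimal numpy sketch of (a) a node-adaptive trigger generator and (b) a label-free trigger-association objective based on prototype embeddings. This is an illustrative interpretation of the abstract, not the authors' implementation: all dimensions, the autoencoder-style generator, the cosine-similarity loss, and every function name are assumptions introduced for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not taken from the paper.
d = 16          # node embedding dimension
n_protos = 4    # number of prototype embeddings

# Prototype embeddings the trigger is associated with. "Label-free" means
# the attack aligns triggered nodes with these anchors, not with any
# downstream class labels (which are unknown at pre-training time).
prototypes = rng.normal(size=(n_protos, d))
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

def node_adaptive_trigger(x, W_enc, W_dec):
    """Sketch of a node-adaptive trigger generator: an autoencoder-style
    map yields a node-specific perturbation, so the trigger adapts to each
    node's own feature distribution (stealth across heterogeneous domains)."""
    z = np.tanh(x @ W_enc)          # encode the node's features
    return x + 0.1 * (z @ W_dec)    # small, node-conditioned perturbation

def association_loss(h_trig, prototypes):
    """Label-free association objective: pull the triggered embedding toward
    its nearest prototype (cosine similarity); no task labels are required."""
    h = h_trig / np.linalg.norm(h_trig)
    sims = prototypes @ h
    return 1.0 - sims.max()         # minimized when aligned with a prototype

# Toy usage: pass one node's features through the generator, score the loss.
W_enc = rng.normal(size=(d, d)) * 0.1
W_dec = rng.normal(size=(d, d)) * 0.1
x = rng.normal(size=d)
x_trig = node_adaptive_trigger(x, W_enc, W_dec)
loss = association_loss(x_trig, prototypes)
print(round(float(loss), 4))
```

At inference time on a downstream task, any node carrying such a perturbation would be embedded near a prototype, so whatever class the fine-tuned head assigns to that prototype region becomes the attacker's target behavior; the paper's third module (persistent backdoor anchoring) additionally ties this association to parameters that fine-tuning barely updates.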
Problem

Research questions and friction points this paper is trying to address.

Addressing backdoor attack challenges in Graph Foundation Models
Ensuring stealthy trigger insertion across diverse graph domains
Maintaining persistent backdoor behavior during downstream fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Label-free trigger association with prototype embeddings
Node-adaptive trigger generator for stealthy activation
Persistent backdoor anchoring in fine-tuning-insensitive parameters