Adversarial Attacks and Defenses on Graph-aware Large Language Models (LLMs)

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper systematically reveals the severe vulnerability of graph-aware large language models (e.g., LLAGA, GraphPrompter) to adversarial attacks on node classification tasks, including training-time poisoning, test-time evasion, and a novel node-injection attack that exploits LLAGA's node-sequence template to inject malicious placeholder nodes. Method: the authors identify the graph encoding architecture as a critical security weakness and propose GALGUARD, an end-to-end defense framework that integrates LLM-based feature correction with GNN structural-robustness enhancements to jointly mitigate feature perturbations and topological manipulations. Contribution/Results: experiments show that existing models suffer drastic performance degradation under stealthy attacks, whereas GALGUARD significantly improves robustness, restoring average classification accuracy by over 85% across diverse adversarial settings.

📝 Abstract
Large Language Models (LLMs) are increasingly integrated with graph-structured data for tasks like node classification, a domain traditionally dominated by Graph Neural Networks (GNNs). While this integration leverages rich relational information to improve task performance, the robustness of these graph-aware LLMs against adversarial attacks remains unexplored. We take the first step to explore the vulnerabilities of graph-aware LLMs by leveraging existing adversarial attack methods tailored for graph-based models, including those for poisoning (training-time attacks) and evasion (test-time attacks), on two representative models, LLAGA (Chen et al. 2024) and GRAPHPROMPTER (Liu et al. 2024). Additionally, we discover a new attack surface for LLAGA where an attacker can inject malicious nodes as placeholders into the node sequence template to severely degrade its performance. Our systematic analysis reveals that certain design choices in graph encoding can enhance attack success, with specific findings that: (1) the node sequence template in LLAGA increases its vulnerability; (2) the GNN encoder used in GRAPHPROMPTER demonstrates greater robustness; and (3) both approaches remain susceptible to imperceptible feature perturbation attacks. Finally, we propose an end-to-end defense framework, GALGUARD, that combines an LLM-based feature correction module to mitigate feature-level perturbations and adapted GNN defenses to protect against structural attacks.
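The abstract's finding (3) says both models remain susceptible to imperceptible feature perturbation attacks. The paper's attack implementation is not shown here; as a generic illustration only, the sketch below applies an FGSM-style gradient-sign perturbation to node features against a toy one-layer feature-propagation classifier. The synthetic graph, the model, and all variable names are assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes, 3-dim features, 2 classes (illustrative, not from the paper).
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)      # adjacency with self-loops
A_hat = A / A.sum(axis=1, keepdims=True)       # row-normalized propagation
X = rng.normal(size=(4, 3))                    # node features
W = rng.normal(size=(3, 2))                    # frozen linear classifier head
y = np.array([0, 0, 1, 1])                     # ground-truth labels

def logits(feats):
    # One propagation step followed by a linear head.
    return A_hat @ feats @ W

def loss_grad_wrt_X(feats):
    # Gradient of cross-entropy loss w.r.t. the input node features.
    Z = logits(feats)
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    G = P.copy()
    G[np.arange(len(y)), y] -= 1               # dL/dZ = softmax - one-hot
    return A_hat.T @ G @ W.T                   # chain rule back through A_hat and W

eps = 0.1                                      # small L-infinity budget ("imperceptible")
X_adv = X + eps * np.sign(loss_grad_wrt_X(X))  # FGSM-style single step

clean_acc = (logits(X).argmax(1) == y).mean()
adv_acc = (logits(X_adv).argmax(1) == y).mean()
```

Because the step is bounded by `eps` per feature, the perturbation stays small in L-infinity norm while being aligned with the loss gradient, which is the sense in which such attacks are "imperceptible".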
Problem

Research questions and friction points this paper is trying to address.

Explores vulnerabilities of graph-aware LLMs to adversarial attacks
Identifies new attack surfaces in node sequence templates
Proposes defense framework against feature and structural attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging adversarial attacks on graph-aware LLMs
Discovering new attack surface via malicious nodes
Proposing GALGUARD defense with feature correction
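GALGUARD's feature-correction module is LLM-based and its implementation is not described here. As a hedged sketch of the underlying intuition, that neighborhood information can dampen feature-level perturbations, the snippet below smooths each node's features toward its neighborhood mean. The function name, graph, and `alpha` parameter are illustrative assumptions, not the paper's method.

```python
import numpy as np

def smooth_features(A, X, alpha=0.5):
    """Blend each node's features with its neighborhood mean.

    A     : adjacency matrix with self-loops, shape (n, n)
    X     : node feature matrix, shape (n, d)
    alpha : weight kept on the node's own (possibly perturbed) features
    """
    A_hat = A.astype(float) / A.sum(axis=1, keepdims=True)  # row-normalize
    return alpha * X + (1 - alpha) * (A_hat @ X)

# Demo: smoothing shrinks the effect of a single-node feature perturbation.
rng = np.random.default_rng(1)
A = np.ones((3, 3))                         # fully connected toy graph w/ self-loops
X_clean = rng.normal(size=(3, 4))
X_pert = X_clean.copy()
X_pert[0] += 0.5                            # adversarially shift node 0's features

X_smooth = smooth_features(A, X_pert)
diff_before = np.linalg.norm(X_pert - X_clean)
diff_after = np.linalg.norm(smooth_features(A, X_pert) - smooth_features(A, X_clean))
```

In this toy case the perturbation's energy (Frobenius norm of the deviation from the clean features) strictly decreases after smoothing, because the averaging spreads and attenuates the localized attack.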