Unveiling the Vulnerability of Graph-LLMs: An Interpretable Multi-Dimensional Adversarial Attack on TAGs

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph-LLMs on Text-Attributed Graphs (TAGs) exhibit a dual adversarial vulnerability, both structural and textual, yet existing attacks target only one of these dimensions and lack coordinated perturbation mechanisms. To address this, the authors propose the first interpretable multi-dimensional adversarial attack framework, which jointly optimizes graph-topology perturbations and textual semantic edits. Crucially, it incorporates human-understandable perturbation constraints to improve interpretability and real-world threat validity. The framework adopts a three-module cooperative architecture that enables cross-model adversarial example generation and theoretical analysis. Extensive experiments across multiple benchmark datasets and state-of-the-art Graph-LLMs show that the method significantly improves attack success rate (average +12.7%) and stealthiness, while uniquely exposing deep semantic vulnerabilities in Graph-LLMs. The implementation is publicly available.

📝 Abstract
Graph Neural Networks (GNNs) have become a pivotal framework for modeling graph-structured data, enabling a wide range of applications from social network analysis to molecular chemistry. By integrating large language models (LLMs), text-attributed graphs (TAGs) enhance node representations with rich textual semantics, significantly boosting the expressive power of graph-based learning. However, this sophisticated synergy introduces critical vulnerabilities, as Graph-LLMs are susceptible to adversarial attacks on both their structural topology and textual attributes. Although specialized attack methods have been designed for each of these aspects, no work has yet unified them into a comprehensive approach. In this work, we propose the Interpretable Multi-Dimensional Graph Attack (IMDGA), a novel human-centric adversarial attack framework designed to orchestrate multi-level perturbations across both graph structure and textual features. IMDGA utilizes three tightly integrated modules to craft attacks that balance interpretability and impact, enabling a deeper understanding of Graph-LLM vulnerabilities. Through rigorous theoretical analysis and comprehensive empirical evaluations on diverse datasets and architectures, IMDGA demonstrates superior interpretability, attack effectiveness, stealthiness, and robustness compared to existing methods. By exposing critical weaknesses in TAG representation learning, this work uncovers a previously underexplored semantic dimension of vulnerability in Graph-LLMs, offering valuable insights for improving their resilience. Our code and resources are publicly available at https://anonymous.4open.science/r/IMDGA-7289.
Problem

Research questions and friction points this paper is trying to address.

Exposes Graph-LLM vulnerabilities to multi-dimensional adversarial attacks
Unifies structural and textual perturbations for comprehensive vulnerability analysis
Reveals semantic weaknesses in text-attributed graph representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes multi-dimensional adversarial attack on graph structure and text
Integrates three modules for interpretable and impactful perturbations
Exposes semantic vulnerabilities in Graph-LLMs through theoretical and empirical analysis
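The coordinated structure-plus-text perturbation idea behind the framework can be illustrated with a minimal greedy sketch. Everything below is a hypothetical stand-in: the `surrogate_loss` toy function, the candidate generators, and the one-edit-per-dimension budget are illustrative assumptions, not IMDGA's actual modules.

```python
# Toy sketch of a coordinated structural + textual attack:
# jointly search for the (edge flip, word swap) pair that most
# increases a surrogate model's loss. Hypothetical stand-in for
# IMDGA's multi-module optimization, not the paper's method.
from itertools import product

def surrogate_loss(edges, words):
    """Toy surrogate victim: loss rises when a key edge is removed
    (structural damage) and a key word is replaced (semantic damage)."""
    loss = 0.0
    if ("u", "v") not in edges:   # structural perturbation succeeded
        loss += 1.0
    if "benign" not in words:     # textual perturbation succeeded
        loss += 1.0
    return loss

def joint_greedy_attack(edges, words):
    """Greedily pick one edge deletion and one word substitution that
    together maximize the surrogate loss (budget: one edit per dimension)."""
    edge_flips = [None] + sorted(edges)                   # None = no structural edit
    word_swaps = [None] + [(w, "neutral") for w in words]  # None = no textual edit
    best, best_loss = (None, None), surrogate_loss(edges, words)
    for flip, swap in product(edge_flips, word_swaps):
        e = set(edges) - ({flip} if flip else set())
        w = [swap[1] if swap and t == swap[0] else t for t in words]
        loss = surrogate_loss(e, w)
        if loss > best_loss:
            best, best_loss = (flip, swap), loss
    return best, best_loss
```

Even this toy search shows why coordination matters: either edit alone raises the surrogate loss by only 1.0, while the jointly chosen pair raises it by 2.0, mirroring the paper's claim that single-dimension attacks leave damage on the table.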