Can Large Language Models Improve the Adversarial Robustness of Graph Neural Networks?

📅 2024-08-16
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the poor robustness of Graph Neural Networks (GNNs) under topological perturbation attacks, this paper proposes LLM4RGNN, a framework that distills the inference capability of GPT-4 into lightweight local Large Language Models (LLMs) for graph structure repair. Methodologically, it introduces a two-module mechanism: an LLM-based malicious edge detector identifies adversarially modified edges, while an LM-based edge predictor recovers important missing connections, together reconstructing a robust graph structure. Its key contribution lies in departing from conventional node-feature-centric robustness paradigms by embedding LLMs' reasoning and generative capabilities directly into topological repair. Whereas simply using LLMs to enhance node features still leaves an average accuracy drop of 23.1% under attack, experiments show that LLM4RGNN consistently improves robustness across multiple GNN architectures; even at a 40% perturbation ratio, accuracy in some cases surpasses that on the original clean graph.

πŸ“ Abstract
Graph neural networks (GNNs) are vulnerable to adversarial attacks, especially topology perturbations, and many methods for improving the robustness of GNNs have received considerable attention. Recently, we have witnessed the significant success of large language models (LLMs), leading many to explore their great potential on GNNs. However, existing work mainly focuses on improving the performance of GNNs by utilizing LLMs to enhance node features. We therefore ask: will the robustness of GNNs also be enhanced by the powerful understanding and inference capabilities of LLMs? Our empirical results show that although LLMs can improve the robustness of GNNs, there is still an average decrease of 23.1% in accuracy, implying that GNNs remain extremely vulnerable to topology attacks. A further question is thus how to extend the capabilities of LLMs to graph adversarial robustness. In this paper, we propose an LLM-based robust graph structure inference framework, LLM4RGNN, which distills the inference capabilities of GPT-4 into a local LLM for identifying malicious edges and an LM-based edge predictor for finding missing important edges, so as to recover a robust graph structure. Extensive experiments demonstrate that LLM4RGNN consistently improves robustness across various GNNs. Even in some cases where the perturbation ratio increases to 40%, the accuracy of GNNs is still better than that on the clean graph. The source code is available at https://github.com/zhongjian-zhang/LLM4RGNN.
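The abstract's two-stage pipeline (prune edges the LLM judges malicious, then add back high-confidence edges proposed by a predictor) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `edge_score` is a hypothetical stand-in for both the distilled local LLM's edge judgments and the LM-based edge predictor's confidence scores, and the thresholds are made-up parameters.

```python
def purify_graph(edges, candidate_edges, edge_score,
                 keep_threshold=0.5, add_threshold=0.9):
    """Return a cleaned edge set for downstream GNN training.

    edges           -- (u, v) pairs in the (possibly attacked) graph
    candidate_edges -- (u, v) pairs absent from the graph, proposed by a predictor
    edge_score      -- callable (u, v) -> float in [0, 1]; higher = more trustworthy
    """
    # Stage 1: drop existing edges scored as likely malicious.
    kept = {(u, v) for (u, v) in edges if edge_score(u, v) >= keep_threshold}
    # Stage 2: add missing edges only when the predictor is highly confident.
    added = {(u, v) for (u, v) in candidate_edges
             if (u, v) not in kept and edge_score(u, v) >= add_threshold}
    return kept | added


if __name__ == "__main__":
    # Toy scorer: trust edges between same-parity nodes, a stand-in for the
    # semantic similarity an LLM would judge from node texts.
    score = lambda u, v: 1.0 if (u % 2) == (v % 2) else 0.1
    attacked = [(0, 2), (1, 3), (0, 1)]   # (0, 1) plays an injected edge
    candidates = [(2, 4), (1, 2)]
    print(sorted(purify_graph(attacked, candidates, score)))
    # The injected (0, 1) is pruned and the confident candidate (2, 4) is added.
```

The asymmetric thresholds reflect the intuition that removing a suspicious edge is cheap, while adding a wrong edge can propagate errors through message passing, so additions should require higher confidence.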
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Graph Neural Networks
Adversarial Robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM4RGNN
Adversarial Robustness
Graph Neural Networks