A Few Words Can Distort Graphs: Knowledge Poisoning Attacks on Graph-based Retrieval-Augmented Generation of Large Language Models

📅 2025-08-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper identifies a novel knowledge poisoning attack targeting the knowledge-graph construction phase of GraphRAG systems: modifying fewer than 0.05% of the words in the source text (e.g., pronouns or dependency-critical tokens) suffices to induce erroneous graph structures, causing downstream question-answering accuracy to collapse from 95% to 50%. Method: the authors propose two complementary attack paradigms, *targeted* (precisely manipulating specific answer outputs) and *generic* (globally corrupting graph topology), combining graph-theoretic analysis, dependency parsing, and LLM-driven semantic rewriting to achieve low-perturbation, high-stealth adversarial manipulation. Contribution/Results: this work is the first systematic exposure of GraphRAG's structural fragility under knowledge poisoning. Targeted attacks achieve a 93.1% success rate, while existing defenses fail to detect the poisoned text. The findings provide theoretical insight and empirical evidence for the security evaluation and robustness hardening of graph-augmented generation systems.

📝 Abstract
Graph-based Retrieval-Augmented Generation (GraphRAG) has recently emerged as a promising paradigm for enhancing large language models (LLMs) by converting raw text into structured knowledge graphs, improving both accuracy and explainability. However, GraphRAG relies on LLMs to extract knowledge from raw text during graph construction, and this process can be maliciously manipulated to implant misleading information. Targeting this attack surface, we propose two knowledge poisoning attacks (KPAs) and demonstrate that modifying only a few words in the source text can significantly change the constructed graph, poison the GraphRAG, and severely mislead downstream reasoning. The first attack, named Targeted KPA (TKPA), utilizes graph-theoretic analysis to locate vulnerable nodes in the generated graphs and rewrites the corresponding narratives with LLMs, achieving precise control over specific question-answering (QA) outcomes with a success rate of 93.1%, while keeping the poisoned text fluent and natural. The second attack, named Universal KPA (UKPA), exploits linguistic cues such as pronouns and dependency relations to disrupt the structural integrity of the generated graph by altering globally influential words. With fewer than 0.05% of full text modified, the QA accuracy collapses from 95% to 50%. Furthermore, experiments show that state-of-the-art defense methods fail to detect these attacks, highlighting that securing GraphRAG pipelines against knowledge poisoning remains largely unexplored.
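The abstract states that TKPA "utilizes graph-theoretic analysis to locate vulnerable nodes in the generated graphs" before rewriting the surrounding narrative. The paper's code is not reproduced here; as a rough illustration only, the node-selection step could resemble the toy sketch below, which ranks entities of an extracted knowledge graph by degree centrality and flags the least-connected nodes as candidate rewrite targets (the graph representation, the centrality choice, and all names are this sketch's assumptions, not the authors' method):

```python
# Toy sketch (assumed, not the paper's implementation): rank entity
# nodes of an extracted knowledge graph by degree centrality and flag
# the weakly connected ones as candidate targets for rewriting.

from collections import defaultdict

def degree_centrality(edges):
    """Degree centrality over an undirected graph given as (u, v) pairs."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

def vulnerable_nodes(edges, k=2):
    """Return the k least-connected nodes: a hypothetical proxy for the
    'vulnerable nodes' TKPA locates via graph-theoretic analysis."""
    cent = degree_centrality(edges)
    return sorted(cent, key=cent.get)[:k]

# Hypothetical knowledge graph: (subject, object) pairs from extracted triples.
kg = [
    ("GraphRAG", "LLM"), ("GraphRAG", "knowledge graph"),
    ("LLM", "knowledge graph"), ("knowledge graph", "QA"),
    ("QA", "accuracy"),
]
print(vulnerable_nodes(kg))  # ['accuracy', 'GraphRAG']
```

A weakly connected node has few redundant paths supporting it, so perturbing the sentences that produced its edges plausibly changes retrieval for questions about it; the actual paper may use a different, more sophisticated vulnerability criterion.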
Problem

Research questions and friction points this paper is trying to address.

Attackers can manipulate source text to poison GraphRAG knowledge graphs
A handful of word changes distorts the graph and misleads LLM reasoning
Current defenses fail to detect these knowledge poisoning attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Targeted KPA manipulates graphs via vulnerable nodes
Universal KPA disrupts graphs using linguistic cues
Both attacks modify minimal text for maximum impact
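The UKPA intuition, that altering a globally influential word such as a pronoun can change which entities the extraction LLM links, can be illustrated with a deliberately naive toy extractor. Everything below (the regex pattern, the coreference rule, the sentences) is invented for illustration and is not the paper's pipeline:

```python
# Hedged sketch of the UKPA intuition, not the paper's implementation:
# a one-word substitution changes the subject a naive triple extractor
# resolves, so the resulting graph edges differ.

import re

def extract_edges(text, antecedent):
    """Naive pattern extractor: 'X improves Y' -> edge (X, Y).
    The pronoun 'It' is resolved to the given antecedent."""
    edges = set()
    for subj, obj in re.findall(r"(\w+) improves (\w+)", text):
        if subj == "It":
            subj = antecedent
        edges.add((subj, obj))
    return edges

clean = "GraphRAG builds graphs. It improves accuracy."
# Poisoned variant: in a real corpus fewer than 0.05% of the text would
# be touched; here a single pronoun is swapped for illustration.
poisoned = "GraphRAG builds graphs. Retrieval improves accuracy."

print(extract_edges(clean, "GraphRAG"))     # {('GraphRAG', 'accuracy')}
print(extract_edges(poisoned, "GraphRAG"))  # {('Retrieval', 'accuracy')}
```

Because graph construction delegates entity and relation extraction to an LLM, a substitution that looks innocuous to a human reader can redirect an edge to the wrong node; the paper's attack automates this word selection via dependency relations rather than a fixed pattern.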
Authors
Jiayi Wen (PhD in Mathematics, University of California, San Diego; interests: Mathematical Modeling, Monte Carlo Simulations, Variational Analysis, Numerical Computation)
Tianxin Chen (Fudan University, Shanghai, China)
Zhirun Zheng (Seoul National University, Seoul, South Korea)
Cheng Huang (Fudan University, Shanghai, China)