🤖 AI Summary
Background: Prior research on retrieval-augmented generation (RAG) security has focused predominantly on unstructured text, overlooking the unique editability and structural constraints of knowledge graphs (KGs) and leaving the vulnerability of KG-RAG systems to data poisoning attacks unexplored. Method: This paper introduces a stealthy, practically feasible knowledge poisoning attack paradigm tailored to KG-RAG: it identifies adversarial target answers and injects perturbation triples into the KG to construct spurious reasoning paths, thereby steering model outputs toward erroneous or harmful content. Contribution/Results: Extensive experiments across multiple benchmark datasets and state-of-the-art KG-RAG methods demonstrate that injecting fewer than 0.1% perturbed triples suffices to significantly degrade system performance. This work establishes the first systematic investigation of KG-RAG security under data poisoning, filling a critical gap in the literature and providing foundational insights for robustness evaluation and defense design in structured-knowledge-augmented systems.
📝 Abstract
Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by retrieving external data to mitigate hallucinations and outdated knowledge. Benefiting from their strong ability to integrate diverse data sources and support faithful reasoning, knowledge graphs (KGs) have been increasingly adopted in RAG systems, giving rise to KG-based RAG (KG-RAG) methods. Although RAG systems are widely deployed, recent studies have revealed their vulnerability to data poisoning attacks, where malicious information injected into external knowledge sources can mislead the system into producing incorrect or harmful responses. However, these studies focus exclusively on RAG systems using unstructured textual data sources, leaving the security risks of KG-RAG largely unexplored, despite the fact that KGs present unique vulnerabilities due to their structured and editable nature. In this work, we conduct the first systematic investigation of the security of KG-RAG methods under data poisoning attacks. To this end, we introduce a practical, stealthy attack setting that aligns with real-world implementation. We propose an attack strategy that first identifies adversarial target answers and then inserts perturbation triples to complete misleading inference chains in the KG, increasing the likelihood that KG-RAG methods retrieve and rely on these perturbations during generation. Through extensive experiments on two benchmarks and four recent KG-RAG methods, our attack strategy demonstrates strong effectiveness in degrading KG-RAG performance, even with minimal KG perturbations. In-depth analyses are also conducted to understand the safety threats within the internal stages of KG-RAG systems and to explore the robustness of LLMs against adversarial knowledge.
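The core mechanism described above — completing a misleading inference chain with a handful of injected triples — can be illustrated with a minimal sketch. This is not the paper's implementation; the KG, entities, relations, and the single-edge-per-(head, relation) simplification are all hypothetical, chosen only to show how a few perturbation triples can reroute a multi-hop reasoning path toward an adversarial answer.

```python
# Toy knowledge graph: (head, relation) -> tail.
# Simplification (hypothetical): each (head, relation) pair has one tail,
# so an injected triple can displace the clean edge during "retrieval".
kg = {
    ("Paris", "capital_of"): "France",
    ("France", "currency"): "Euro",
}

def answer(kg, start, relations):
    """Follow a chain of relations from `start`; None if any hop is missing."""
    node = start
    for rel in relations:
        node = kg.get((node, rel))
        if node is None:
            return None
    return node

# Clean reasoning path for a 2-hop question:
# "What currency is used in the country Paris is the capital of?"
clean = answer(kg, "Paris", ["capital_of", "currency"])  # -> "Euro"

# Attack sketch: insert only the perturbation triples needed to complete
# a spurious path ending at the adversarial target answer.
adversarial_target = "Ruble"
perturbations = [
    ("Paris", "capital_of", "Atlantis"),          # spurious intermediate hop
    ("Atlantis", "currency", adversarial_target),  # completes misleading chain
]
for h, r, t in perturbations:
    kg[(h, r)] = t

poisoned = answer(kg, "Paris", ["capital_of", "currency"])  # -> "Ruble"
```

With two injected triples out of four total, the same multi-hop query now resolves to the adversarial answer — a toy analogue of the paper's finding that very small perturbation budgets suffice.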