Chain-of-Thought Poisoning Attacks against R1-based Retrieval-Augmented Generation Systems

📅 2025-05-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Deep-reasoning Retrieval-Augmented Generation (RAG) systems exhibit robustness against conventional knowledge poisoning attacks, posing challenges for security evaluation. Method: This paper proposes Chain-of-Thought Poisoning (CoTP), the first knowledge poisoning framework that explicitly models and exploits the chain-of-thought (CoT) reasoning pattern in RAG. CoTP extracts reasoning templates from a base R1-RAG system to generate semantically coherent and structurally aligned adversarial documents—designed to be misclassified as credible historical reasoning traces and subsequently incorporated into the model’s inference process. Leveraging cognitive biases in RAG’s alignment with training signals, CoTP achieves high stealthiness. Contribution/Results: Evaluated on the MS MARCO benchmark, CoTP significantly increases attack success rates, evades state-of-the-art defenses, and uncovers novel security vulnerabilities introduced by deep-reasoning enhancements in RAG architectures.

Technology Category

Application Category

📝 Abstract
Retrieval-augmented generation (RAG) systems can effectively mitigate the hallucination problem of large language models (LLMs),but they also possess inherent vulnerabilities. Identifying these weaknesses before the large-scale real-world deployment of RAG systems is of great importance, as it lays the foundation for building more secure and robust RAG systems in the future. Existing adversarial attack methods typically exploit knowledge base poisoning to probe the vulnerabilities of RAG systems, which can effectively deceive standard RAG models. However, with the rapid advancement of deep reasoning capabilities in modern LLMs, previous approaches that merely inject incorrect knowledge are inadequate when attacking RAG systems equipped with deep reasoning abilities. Inspired by the deep thinking capabilities of LLMs, this paper extracts reasoning process templates from R1-based RAG systems, uses these templates to wrap erroneous knowledge into adversarial documents, and injects them into the knowledge base to attack RAG systems. The key idea of our approach is that adversarial documents, by simulating the chain-of-thought patterns aligned with the model's training signals, may be misinterpreted by the model as authentic historical reasoning processes, thus increasing their likelihood of being referenced. Experiments conducted on the MS MARCO passage ranking dataset demonstrate the effectiveness of our proposed method.
Problem

Research questions and friction points this paper is trying to address.

Identifies vulnerabilities in RAG systems to enhance security
Probes RAG weaknesses using chain-of-thought poisoning attacks
Tests adversarial documents on reasoning-capable RAG systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extracts reasoning templates from R1-based RAG systems
Wraps erroneous knowledge using reasoning templates
Injects adversarial documents to exploit model misinterpretation
🔎 Similar Papers
No similar papers found.
H
Hongru Song
CAS Key Lab of Network Data Science and Technology, ICT, CAS, University of Chinese Academy of Sciences, Beijing, China
Y
Yu-an Liu
CAS Key Lab of Network Data Science and Technology, ICT, CAS, University of Chinese Academy of Sciences, Beijing, China
Ruqing Zhang
Ruqing Zhang
Institute of Computing Technology, Chinese Academy of Sciences
Information RetrievalNatural Language ProcessingLarge Language Models
Jiafeng Guo
Jiafeng Guo
Professor, Institute of Computing Techonology, CAS
Information RetrievalMachine LearningText AnalysisNeuIR
Yixing Fan
Yixing Fan
ict
relevance rankingdeep learninginformation retrieval